Feb 8 23:50:34.997414 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 8 23:50:34.997434 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:50:34.997446 kernel: BIOS-provided physical RAM map: Feb 8 23:50:34.997453 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 8 23:50:34.997460 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 8 23:50:34.997466 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 8 23:50:34.997474 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Feb 8 23:50:34.997481 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Feb 8 23:50:34.997489 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 8 23:50:34.997495 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 8 23:50:34.997502 kernel: NX (Execute Disable) protection: active Feb 8 23:50:34.997508 kernel: SMBIOS 2.8 present. Feb 8 23:50:34.997515 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Feb 8 23:50:34.997522 kernel: Hypervisor detected: KVM Feb 8 23:50:34.997530 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 8 23:50:34.997539 kernel: kvm-clock: cpu 0, msr 62faa001, primary cpu clock Feb 8 23:50:34.997546 kernel: kvm-clock: using sched offset of 5018275772 cycles Feb 8 23:50:34.997554 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 8 23:50:34.997561 kernel: tsc: Detected 1996.249 MHz processor Feb 8 23:50:34.997569 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 8 23:50:34.997577 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 8 23:50:34.997584 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Feb 8 23:50:34.997592 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 8 23:50:34.997601 kernel: ACPI: Early table checksum verification disabled Feb 8 23:50:34.997608 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Feb 8 23:50:34.997616 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:50:34.997623 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:50:34.997630 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:50:34.997637 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 8 23:50:34.997645 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:50:34.997652 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:50:34.997659 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Feb 8 23:50:34.997668 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Feb 8 23:50:34.997675 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 8 23:50:34.997682 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Feb 8 23:50:34.997689 
kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Feb 8 23:50:34.997696 kernel: No NUMA configuration found Feb 8 23:50:34.997703 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Feb 8 23:50:34.997711 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Feb 8 23:50:34.997718 kernel: Zone ranges: Feb 8 23:50:34.997730 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 8 23:50:34.997738 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Feb 8 23:50:34.997745 kernel: Normal empty Feb 8 23:50:34.997752 kernel: Movable zone start for each node Feb 8 23:50:34.997760 kernel: Early memory node ranges Feb 8 23:50:34.997767 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 8 23:50:34.997777 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Feb 8 23:50:34.997784 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Feb 8 23:50:34.997792 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 8 23:50:34.997799 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 8 23:50:34.997807 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Feb 8 23:50:34.997814 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 8 23:50:34.997822 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 8 23:50:34.997829 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 8 23:50:34.997837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 8 23:50:34.997846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 8 23:50:34.997854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 8 23:50:34.997861 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 8 23:50:34.997869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 8 23:50:34.997876 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 8 23:50:34.997884 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 8 23:50:34.997891 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 8 23:50:34.997899 kernel: Booting paravirtualized kernel on KVM Feb 8 23:50:34.997907 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 8 23:50:34.997915 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 8 23:50:34.997925 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 8 23:50:34.997932 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 8 23:50:34.997940 kernel: pcpu-alloc: [0] 0 1 Feb 8 23:50:34.997947 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Feb 8 23:50:34.997955 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 8 23:50:34.997962 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Feb 8 23:50:34.997970 kernel: Policy zone: DMA32 Feb 8 23:50:34.997978 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:50:34.997989 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 8 23:50:34.997997 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 8 23:50:34.998005 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 8 23:50:34.998013 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 8 23:50:34.998021 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved) Feb 8 23:50:34.998029 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 8 23:50:34.998036 kernel: ftrace: allocating 34475 entries in 135 pages Feb 8 23:50:34.998044 kernel: ftrace: allocated 135 pages with 4 groups Feb 8 23:50:34.998053 kernel: rcu: Hierarchical RCU implementation. Feb 8 23:50:34.998061 kernel: rcu: RCU event tracing is enabled. Feb 8 23:50:34.998069 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 8 23:50:34.998077 kernel: Rude variant of Tasks RCU enabled. Feb 8 23:50:34.998084 kernel: Tracing variant of Tasks RCU enabled. Feb 8 23:50:34.998092 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 8 23:50:34.998100 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 8 23:50:34.998108 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 8 23:50:34.998115 kernel: Console: colour VGA+ 80x25 Feb 8 23:50:34.998125 kernel: printk: console [tty0] enabled Feb 8 23:50:34.998133 kernel: printk: console [ttyS0] enabled Feb 8 23:50:34.998141 kernel: ACPI: Core revision 20210730 Feb 8 23:50:34.998148 kernel: APIC: Switch to symmetric I/O mode setup Feb 8 23:50:34.998156 kernel: x2apic enabled Feb 8 23:50:34.998163 kernel: Switched APIC routing to physical x2apic. Feb 8 23:50:34.998171 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 8 23:50:34.998178 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 8 23:50:34.998186 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Feb 8 23:50:34.998194 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 8 23:50:34.998203 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 8 23:50:34.998211 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 8 23:50:34.998218 kernel: Spectre V2 : Mitigation: Retpolines Feb 8 23:50:34.998226 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 8 23:50:34.998234 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 8 23:50:34.998241 kernel: Speculative Store Bypass: Vulnerable Feb 8 23:50:34.998249 kernel: x86/fpu: x87 FPU will use FXSAVE Feb 8 23:50:34.998256 kernel: Freeing SMP alternatives memory: 32K Feb 8 23:50:34.998264 kernel: pid_max: default: 32768 minimum: 301 Feb 8 23:50:34.998275 kernel: LSM: Security Framework initializing Feb 8 23:50:34.998282 kernel: SELinux: Initializing. Feb 8 23:50:34.998290 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 8 23:50:34.998337 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 8 23:50:34.998345 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Feb 8 23:50:34.998353 kernel: Performance Events: AMD PMU driver. Feb 8 23:50:34.998360 kernel: ... version: 0 Feb 8 23:50:34.998368 kernel: ... bit width: 48 Feb 8 23:50:34.998375 kernel: ... generic registers: 4 Feb 8 23:50:34.998390 kernel: ... 
value mask: 0000ffffffffffff Feb 8 23:50:34.998398 kernel: ... max period: 00007fffffffffff Feb 8 23:50:34.998407 kernel: ... fixed-purpose events: 0 Feb 8 23:50:34.998415 kernel: ... event mask: 000000000000000f Feb 8 23:50:34.998423 kernel: signal: max sigframe size: 1440 Feb 8 23:50:34.998430 kernel: rcu: Hierarchical SRCU implementation. Feb 8 23:50:34.998438 kernel: smp: Bringing up secondary CPUs ... Feb 8 23:50:34.998446 kernel: x86: Booting SMP configuration: Feb 8 23:50:34.998455 kernel: .... node #0, CPUs: #1 Feb 8 23:50:34.998463 kernel: kvm-clock: cpu 1, msr 62faa041, secondary cpu clock Feb 8 23:50:34.998471 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Feb 8 23:50:34.998479 kernel: smp: Brought up 1 node, 2 CPUs Feb 8 23:50:34.998487 kernel: smpboot: Max logical packages: 2 Feb 8 23:50:34.998495 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Feb 8 23:50:34.998503 kernel: devtmpfs: initialized Feb 8 23:50:34.998510 kernel: x86/mm: Memory block size: 128MB Feb 8 23:50:34.998519 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 8 23:50:34.998528 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 8 23:50:34.998536 kernel: pinctrl core: initialized pinctrl subsystem Feb 8 23:50:34.998544 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 8 23:50:34.998552 kernel: audit: initializing netlink subsys (disabled) Feb 8 23:50:34.998560 kernel: audit: type=2000 audit(1707436234.050:1): state=initialized audit_enabled=0 res=1 Feb 8 23:50:34.998568 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 8 23:50:34.998576 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 8 23:50:34.998584 kernel: cpuidle: using governor menu Feb 8 23:50:34.998591 kernel: ACPI: bus type PCI registered Feb 8 23:50:34.998601 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 8 23:50:34.998609 kernel: dca service started, version 1.12.1 Feb 8 23:50:34.998616 kernel: PCI: Using configuration type 1 for base access Feb 8 23:50:34.998625 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 8 23:50:34.998633 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 8 23:50:34.998641 kernel: ACPI: Added _OSI(Module Device) Feb 8 23:50:34.998648 kernel: ACPI: Added _OSI(Processor Device) Feb 8 23:50:34.998656 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 8 23:50:34.998664 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 8 23:50:34.998674 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 8 23:50:34.998681 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 8 23:50:34.998689 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 8 23:50:34.998697 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 8 23:50:34.998705 kernel: ACPI: Interpreter enabled Feb 8 23:50:34.998713 kernel: ACPI: PM: (supports S0 S3 S5) Feb 8 23:50:34.998721 kernel: ACPI: Using IOAPIC for interrupt routing Feb 8 23:50:34.998729 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 8 23:50:34.998736 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 8 23:50:34.998746 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 8 23:50:34.998901 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 8 23:50:34.998995 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Feb 8 23:50:34.999008 kernel: acpiphp: Slot [3] registered Feb 8 23:50:34.999016 kernel: acpiphp: Slot [4] registered Feb 8 23:50:34.999024 kernel: acpiphp: Slot [5] registered Feb 8 23:50:34.999032 kernel: acpiphp: Slot [6] registered Feb 8 23:50:34.999043 kernel: acpiphp: Slot [7] registered Feb 8 23:50:34.999050 kernel: acpiphp: Slot [8] registered Feb 8 23:50:34.999058 kernel: acpiphp: Slot [9] registered Feb 8 23:50:34.999066 kernel: acpiphp: Slot [10] registered Feb 8 23:50:34.999074 kernel: acpiphp: Slot [11] registered Feb 8 23:50:34.999081 kernel: acpiphp: Slot [12] registered Feb 8 23:50:34.999089 kernel: acpiphp: Slot [13] registered Feb 8 23:50:34.999097 kernel: acpiphp: Slot [14] registered Feb 8 23:50:34.999105 kernel: acpiphp: Slot [15] registered Feb 8 23:50:34.999112 kernel: acpiphp: Slot [16] registered Feb 8 23:50:34.999122 kernel: acpiphp: Slot [17] registered Feb 8 23:50:34.999130 kernel: acpiphp: Slot [18] registered Feb 8 23:50:34.999137 kernel: acpiphp: Slot [19] registered Feb 8 23:50:34.999145 kernel: acpiphp: Slot [20] registered Feb 8 23:50:34.999153 kernel: acpiphp: Slot [21] registered Feb 8 23:50:34.999160 kernel: acpiphp: Slot [22] registered Feb 8 23:50:34.999168 kernel: acpiphp: Slot [23] registered Feb 8 23:50:34.999176 kernel: acpiphp: Slot [24] registered Feb 8 23:50:34.999184 kernel: acpiphp: Slot [25] registered Feb 8 23:50:34.999193 kernel: acpiphp: Slot [26] registered Feb 8 23:50:34.999201 kernel: acpiphp: Slot [27] registered Feb 8 23:50:34.999209 kernel: acpiphp: Slot [28] registered Feb 8 23:50:34.999217 kernel: acpiphp: Slot [29] registered Feb 8 23:50:34.999224 kernel: acpiphp: Slot [30] registered Feb 8 23:50:34.999232 kernel: acpiphp: Slot [31] registered Feb 8 23:50:34.999240 kernel: PCI host bridge to bus 0000:00 Feb 8 23:50:34.999374 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 8 23:50:34.999453 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 8 23:50:34.999531 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 8 23:50:34.999603 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 8 
23:50:34.999685 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 8 23:50:34.999765 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 8 23:50:34.999873 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 8 23:50:34.999977 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 8 23:50:35.000083 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 8 23:50:35.000168 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Feb 8 23:50:35.000252 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 8 23:50:35.000359 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 8 23:50:35.000442 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 8 23:50:35.000522 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 8 23:50:35.000612 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 8 23:50:35.000711 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 8 23:50:35.000794 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 8 23:50:35.000883 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 8 23:50:35.000967 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 8 23:50:35.001053 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 8 23:50:35.001138 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Feb 8 23:50:35.001226 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Feb 8 23:50:35.005343 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 8 23:50:35.005471 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 8 23:50:35.005556 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Feb 8 23:50:35.005642 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Feb 8 23:50:35.005733 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 8 23:50:35.005818 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Feb 8 23:50:35.005913 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 8 23:50:35.005997 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 8 23:50:35.006080 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Feb 8 23:50:35.006162 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 8 23:50:35.006251 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Feb 8 23:50:35.006369 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Feb 8 23:50:35.006455 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 8 23:50:35.006551 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Feb 8 23:50:35.006636 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Feb 8 23:50:35.006728 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 8 23:50:35.006740 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 8 23:50:35.006749 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 8 23:50:35.006757 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 8 23:50:35.006765 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 8 23:50:35.006773 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 8 23:50:35.006786 kernel: iommu: Default domain type: Translated Feb 8 23:50:35.006794 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb 8 23:50:35.006878 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 8 23:50:35.006961 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 8 23:50:35.007044 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 8 23:50:35.007056 kernel: vgaarb: loaded Feb 8 23:50:35.007064 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 8 23:50:35.007072 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 8 23:50:35.007081 kernel: PTP clock support registered Feb 8 23:50:35.007094 kernel: PCI: Using ACPI for IRQ routing Feb 8 23:50:35.007102 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 8 23:50:35.007110 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 8 23:50:35.007119 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Feb 8 23:50:35.007127 kernel: clocksource: Switched to clocksource kvm-clock Feb 8 23:50:35.007134 kernel: VFS: Disk quotas dquot_6.6.0 Feb 8 23:50:35.007142 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 8 23:50:35.007150 kernel: pnp: PnP ACPI init Feb 8 23:50:35.007235 kernel: pnp 00:03: [dma 2] Feb 8 23:50:35.007251 kernel: pnp: PnP ACPI: found 5 devices Feb 8 23:50:35.007259 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 8 23:50:35.007267 kernel: NET: Registered PF_INET protocol family Feb 8 23:50:35.007276 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 8 23:50:35.007284 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 8 23:50:35.007292 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 8 23:50:35.007351 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 8 23:50:35.007359 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 8 23:50:35.007370 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 8 23:50:35.007379 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 8 23:50:35.007387 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 8 23:50:35.007395 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 8 23:50:35.007403 kernel: NET: Registered PF_XDP protocol family Feb 8 23:50:35.007480 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 8 23:50:35.007551 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 8 23:50:35.007622 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 8 23:50:35.007692 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 8 23:50:35.007765 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 8 23:50:35.007846 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 8 23:50:35.007937 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 8 23:50:35.008019 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 8 23:50:35.008031 kernel: PCI: CLS 0 bytes, default 64 Feb 8 23:50:35.008039 kernel: Initialise system trusted keyrings Feb 8 23:50:35.008047 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 8 23:50:35.008058 kernel: Key type asymmetric registered Feb 8 23:50:35.008066 kernel: Asymmetric key parser 'x509' registered Feb 8 23:50:35.008074 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 8 23:50:35.008082 kernel: io scheduler mq-deadline 
registered Feb 8 23:50:35.008090 kernel: io scheduler kyber registered Feb 8 23:50:35.008098 kernel: io scheduler bfq registered Feb 8 23:50:35.008106 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 8 23:50:35.008115 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 8 23:50:35.008123 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 8 23:50:35.008131 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 8 23:50:35.008142 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 8 23:50:35.008150 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 8 23:50:35.008158 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 8 23:50:35.008166 kernel: random: crng init done Feb 8 23:50:35.008174 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 8 23:50:35.008182 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 8 23:50:35.008190 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 8 23:50:35.008198 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 8 23:50:35.008286 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 8 23:50:35.008383 kernel: rtc_cmos 00:04: registered as rtc0 Feb 8 23:50:35.008457 kernel: rtc_cmos 00:04: setting system clock to 2024-02-08T23:50:34 UTC (1707436234) Feb 8 23:50:35.008529 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 8 23:50:35.008540 kernel: NET: Registered PF_INET6 protocol family Feb 8 23:50:35.008549 kernel: Segment Routing with IPv6 Feb 8 23:50:35.008556 kernel: In-situ OAM (IOAM) with IPv6 Feb 8 23:50:35.008564 kernel: NET: Registered PF_PACKET protocol family Feb 8 23:50:35.008572 kernel: Key type dns_resolver registered Feb 8 23:50:35.008584 kernel: IPI shorthand broadcast: enabled Feb 8 23:50:35.008592 kernel: sched_clock: Marking stable (709708408, 118854634)->(846740983, -18177941) Feb 8 23:50:35.008600 kernel: registered taskstats version 1 Feb 8 23:50:35.008608 kernel: Loading compiled-in X.509 certificates Feb 8 23:50:35.008616 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 8 23:50:35.008624 kernel: Key type .fscrypt registered Feb 8 23:50:35.008633 kernel: Key type fscrypt-provisioning registered Feb 8 23:50:35.008641 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 8 23:50:35.008651 kernel: ima: Allocated hash algorithm: sha1 Feb 8 23:50:35.008659 kernel: ima: No architecture policies found Feb 8 23:50:35.008667 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 8 23:50:35.008675 kernel: Write protecting the kernel read-only data: 28672k Feb 8 23:50:35.008683 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 8 23:50:35.008691 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 8 23:50:35.008699 kernel: Run /init as init process Feb 8 23:50:35.008708 kernel: with arguments: Feb 8 23:50:35.008716 kernel: /init Feb 8 23:50:35.008725 kernel: with environment: Feb 8 23:50:35.008732 kernel: HOME=/ Feb 8 23:50:35.008740 kernel: TERM=linux Feb 8 23:50:35.008748 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 8 23:50:35.008758 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:50:35.008769 systemd[1]: Detected virtualization kvm. Feb 8 23:50:35.008778 systemd[1]: Detected architecture x86-64. Feb 8 23:50:35.008786 systemd[1]: Running in initrd. Feb 8 23:50:35.008797 systemd[1]: No hostname configured, using default hostname. Feb 8 23:50:35.008805 systemd[1]: Hostname set to . Feb 8 23:50:35.008814 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:50:35.008823 systemd[1]: Queued start job for default target initrd.target. Feb 8 23:50:35.008832 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:50:35.008840 systemd[1]: Reached target cryptsetup.target. Feb 8 23:50:35.008848 systemd[1]: Reached target paths.target. Feb 8 23:50:35.008857 systemd[1]: Reached target slices.target. Feb 8 23:50:35.008869 systemd[1]: Reached target swap.target. Feb 8 23:50:35.008877 systemd[1]: Reached target timers.target. Feb 8 23:50:35.008886 systemd[1]: Listening on iscsid.socket. Feb 8 23:50:35.008895 systemd[1]: Listening on iscsiuio.socket. Feb 8 23:50:35.008903 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:50:35.008912 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:50:35.008920 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:50:35.008931 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:50:35.008939 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:50:35.008948 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:50:35.008957 systemd[1]: Reached target sockets.target. Feb 8 23:50:35.008965 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:50:35.008986 systemd[1]: Finished network-cleanup.service. Feb 8 23:50:35.008997 systemd[1]: Starting systemd-fsck-usr.service... Feb 8 23:50:35.009007 systemd[1]: Starting systemd-journald.service... Feb 8 23:50:35.009016 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:50:35.009025 systemd[1]: Starting systemd-resolved.service... Feb 8 23:50:35.009033 systemd[1]: Starting systemd-vconsole-setup.service... Feb 8 23:50:35.009042 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:50:35.009051 systemd[1]: Finished systemd-fsck-usr.service. Feb 8 23:50:35.009063 systemd-journald[184]: Journal started Feb 8 23:50:35.009107 systemd-journald[184]: Runtime Journal (/run/log/journal/6b37762427fd4a67bccf32090bdb3864) is 4.9M, max 39.5M, 34.5M free. 
Feb 8 23:50:34.972700 systemd-modules-load[185]: Inserted module 'overlay' Feb 8 23:50:35.034412 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 8 23:50:35.034440 kernel: Bridge firewalling registered Feb 8 23:50:35.034467 systemd[1]: Started systemd-journald.service. Feb 8 23:50:35.034482 kernel: audit: type=1130 audit(1707436235.029:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.021636 systemd-resolved[186]: Positive Trust Anchors: Feb 8 23:50:35.042611 kernel: audit: type=1130 audit(1707436235.034:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.042629 kernel: audit: type=1130 audit(1707436235.037:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.021647 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:50:35.021682 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:50:35.024367 systemd-resolved[186]: Defaulting to hostname 'linux'. Feb 8 23:50:35.055760 kernel: audit: type=1130 audit(1707436235.051:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.025659 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 8 23:50:35.064105 kernel: audit: type=1130 audit(1707436235.055:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:50:35.064126 kernel: SCSI subsystem initialized Feb 8 23:50:35.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.034899 systemd[1]: Started systemd-resolved.service. Feb 8 23:50:35.038414 systemd[1]: Reached target nss-lookup.target. Feb 8 23:50:35.042907 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:50:35.044846 systemd[1]: Finished systemd-vconsole-setup.service. Feb 8 23:50:35.052144 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:50:35.060066 systemd[1]: Starting dracut-cmdline-ask.service... Feb 8 23:50:35.076682 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:50:35.084020 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 8 23:50:35.084050 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:50:35.084062 kernel: audit: type=1130 audit(1707436235.077:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.084074 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:50:35.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.083003 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:50:35.093192 dracut-cmdline[202]: dracut-dracut-053 Feb 8 23:50:35.093472 systemd-modules-load[185]: Inserted module 'dm_multipath' Feb 8 23:50:35.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.098830 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:50:35.104066 kernel: audit: type=1130 audit(1707436235.094:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.094588 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:50:35.095868 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:50:35.105488 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:50:35.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.110328 kernel: audit: type=1130 audit(1707436235.106:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.165341 kernel: Loading iSCSI transport class v2.0-870. 
Feb 8 23:50:35.178347 kernel: iscsi: registered transport (tcp) Feb 8 23:50:35.202672 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:50:35.202761 kernel: QLogic iSCSI HBA Driver Feb 8 23:50:35.253895 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:50:35.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.257068 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:50:35.260190 kernel: audit: type=1130 audit(1707436235.254:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.351462 kernel: raid6: sse2x4 gen() 6202 MB/s Feb 8 23:50:35.368392 kernel: raid6: sse2x4 xor() 7267 MB/s Feb 8 23:50:35.385393 kernel: raid6: sse2x2 gen() 14737 MB/s Feb 8 23:50:35.402393 kernel: raid6: sse2x2 xor() 8760 MB/s Feb 8 23:50:35.419392 kernel: raid6: sse2x1 gen() 11346 MB/s Feb 8 23:50:35.437081 kernel: raid6: sse2x1 xor() 7017 MB/s Feb 8 23:50:35.437153 kernel: raid6: using algorithm sse2x2 gen() 14737 MB/s Feb 8 23:50:35.437181 kernel: raid6: .... xor() 8760 MB/s, rmw enabled Feb 8 23:50:35.437893 kernel: raid6: using ssse3x2 recovery algorithm Feb 8 23:50:35.452347 kernel: xor: measuring software checksum speed Feb 8 23:50:35.454762 kernel: prefetch64-sse : 18404 MB/sec Feb 8 23:50:35.454804 kernel: generic_sse : 16750 MB/sec Feb 8 23:50:35.454831 kernel: xor: using function: prefetch64-sse (18404 MB/sec) Feb 8 23:50:35.567881 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:50:35.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.583784 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:50:35.585000 audit: BPF prog-id=7 op=LOAD Feb 8 23:50:35.586000 audit: BPF prog-id=8 op=LOAD Feb 8 23:50:35.588424 systemd[1]: Starting systemd-udevd.service... Feb 8 23:50:35.601651 systemd-udevd[385]: Using default interface naming scheme 'v252'. Feb 8 23:50:35.606400 systemd[1]: Started systemd-udevd.service. Feb 8 23:50:35.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.612078 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:50:35.638859 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 8 23:50:35.683324 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:50:35.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.684652 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:50:35.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:35.739144 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:50:35.813402 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 8 23:50:35.820823 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 8 23:50:35.820848 kernel: GPT:17805311 != 41943039 Feb 8 23:50:35.820860 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 8 23:50:35.821912 kernel: GPT:17805311 != 41943039 Feb 8 23:50:35.822590 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 8 23:50:35.824402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:50:35.860958 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (436) Feb 8 23:50:35.874318 kernel: libata version 3.00 loaded. Feb 8 23:50:35.880937 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:50:35.916393 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 8 23:50:35.916555 kernel: scsi host0: ata_piix Feb 8 23:50:35.916685 kernel: scsi host1: ata_piix Feb 8 23:50:35.916808 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 8 23:50:35.916821 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 8 23:50:35.920508 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:50:35.924552 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:50:35.927670 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:50:35.928204 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:50:35.930167 systemd[1]: Starting disk-uuid.service... Feb 8 23:50:35.941144 disk-uuid[461]: Primary Header is updated. Feb 8 23:50:35.941144 disk-uuid[461]: Secondary Entries is updated. Feb 8 23:50:35.941144 disk-uuid[461]: Secondary Header is updated. Feb 8 23:50:35.950385 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:50:35.953329 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:50:36.968338 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:50:36.968511 disk-uuid[462]: The operation has completed successfully. Feb 8 23:50:37.033635 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:50:37.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.033862 systemd[1]: Finished disk-uuid.service. Feb 8 23:50:37.056140 systemd[1]: Starting verity-setup.service... Feb 8 23:50:37.077367 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 8 23:50:37.176733 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:50:37.180905 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:50:37.186093 systemd[1]: Finished verity-setup.service. Feb 8 23:50:37.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.317359 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:50:37.317897 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:50:37.318537 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:50:37.319480 systemd[1]: Starting ignition-setup.service... Feb 8 23:50:37.320608 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 8 23:50:37.340979 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:50:37.341036 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:50:37.341048 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:50:37.356967 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:50:37.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.370241 systemd[1]: Finished ignition-setup.service. Feb 8 23:50:37.371668 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:50:37.473670 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:50:37.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.475000 audit: BPF prog-id=9 op=LOAD Feb 8 23:50:37.476558 systemd[1]: Starting systemd-networkd.service... Feb 8 23:50:37.518536 systemd-networkd[632]: lo: Link UP Feb 8 23:50:37.518556 systemd-networkd[632]: lo: Gained carrier Feb 8 23:50:37.519438 systemd-networkd[632]: Enumeration completed Feb 8 23:50:37.519879 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:50:37.521606 systemd[1]: Started systemd-networkd.service. Feb 8 23:50:37.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.521696 systemd-networkd[632]: eth0: Link UP Feb 8 23:50:37.521700 systemd-networkd[632]: eth0: Gained carrier Feb 8 23:50:37.524092 systemd[1]: Reached target network.target. Feb 8 23:50:37.525950 systemd[1]: Starting iscsiuio.service... Feb 8 23:50:37.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.533594 systemd[1]: Started iscsiuio.service. Feb 8 23:50:37.534876 systemd[1]: Starting iscsid.service... Feb 8 23:50:37.538603 iscsid[641]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:50:37.538603 iscsid[641]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 8 23:50:37.538603 iscsid[641]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:50:37.538603 iscsid[641]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:50:37.538603 iscsid[641]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:50:37.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:50:37.546422 iscsid[641]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:50:37.540900 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.64/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 8 23:50:37.541471 systemd[1]: Started iscsid.service. Feb 8 23:50:37.543591 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:50:37.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.557158 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:50:37.557739 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:50:37.558152 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:50:37.558629 systemd[1]: Reached target remote-fs.target. Feb 8 23:50:37.559860 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:50:37.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.570120 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:50:37.668948 ignition[554]: Ignition 2.14.0 Feb 8 23:50:37.668977 ignition[554]: Stage: fetch-offline Feb 8 23:50:37.669088 ignition[554]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:50:37.669132 ignition[554]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:50:37.671635 ignition[554]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:50:37.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:37.674492 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:50:37.671846 ignition[554]: parsed url from cmdline: "" Feb 8 23:50:37.676746 systemd-resolved[186]: Detected conflict on linux IN A 172.24.4.64 Feb 8 23:50:37.671857 ignition[554]: no config URL provided Feb 8 23:50:37.676762 systemd-resolved[186]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Feb 8 23:50:37.671876 ignition[554]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:50:37.676984 systemd[1]: Starting ignition-fetch.service... 
Feb 8 23:50:37.671902 ignition[554]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:50:37.671970 ignition[554]: failed to fetch config: resource requires networking Feb 8 23:50:37.672478 ignition[554]: Ignition finished successfully Feb 8 23:50:37.696126 ignition[655]: Ignition 2.14.0 Feb 8 23:50:37.696164 ignition[655]: Stage: fetch Feb 8 23:50:37.696530 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:50:37.696582 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:50:37.699557 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:50:37.699820 ignition[655]: parsed url from cmdline: "" Feb 8 23:50:37.699828 ignition[655]: no config URL provided Feb 8 23:50:37.699839 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:50:37.699887 ignition[655]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:50:37.702962 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 8 23:50:37.703018 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 8 23:50:37.707386 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 8 23:50:38.041928 ignition[655]: GET result: OK Feb 8 23:50:38.042837 ignition[655]: parsing config with SHA512: f0ca96e5cd8f73e6e41f434496a22aaa0868c606d03a1499e883d2ea1432f8f49733e9faedd1744b92beb82ba2186908ff8a6397833dc823b782fa07cf167a67 Feb 8 23:50:38.157875 unknown[655]: fetched base config from "system" Feb 8 23:50:38.158998 unknown[655]: fetched base config from "system" Feb 8 23:50:38.160093 unknown[655]: fetched user config from "openstack" Feb 8 23:50:38.161912 ignition[655]: fetch: fetch complete Feb 8 23:50:38.161969 ignition[655]: fetch: fetch passed Feb 8 23:50:38.162111 ignition[655]: Ignition finished successfully Feb 8 23:50:38.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:38.164708 systemd[1]: Finished ignition-fetch.service. Feb 8 23:50:38.165990 systemd[1]: Starting ignition-kargs.service... Feb 8 23:50:38.175239 ignition[661]: Ignition 2.14.0 Feb 8 23:50:38.175252 ignition[661]: Stage: kargs Feb 8 23:50:38.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:38.186516 systemd[1]: Finished ignition-kargs.service. Feb 8 23:50:38.175411 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:50:38.175433 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:50:38.189731 systemd[1]: Starting ignition-disks.service... 
Feb 8 23:50:38.176368 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:50:38.177772 ignition[661]: kargs: kargs passed Feb 8 23:50:38.177813 ignition[661]: Ignition finished successfully Feb 8 23:50:38.210380 ignition[667]: Ignition 2.14.0 Feb 8 23:50:38.210412 ignition[667]: Stage: disks Feb 8 23:50:38.210738 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:50:38.210783 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:50:38.213132 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:50:38.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:38.219265 systemd[1]: Finished ignition-disks.service. Feb 8 23:50:38.218230 ignition[667]: disks: disks passed Feb 8 23:50:38.219799 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:50:38.218439 ignition[667]: Ignition finished successfully Feb 8 23:50:38.220264 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:50:38.220795 systemd[1]: Reached target local-fs.target. Feb 8 23:50:38.222201 systemd[1]: Reached target sysinit.target. Feb 8 23:50:38.223511 systemd[1]: Reached target basic.target. Feb 8 23:50:38.225833 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:50:38.503879 systemd-fsck[675]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 8 23:50:38.658089 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:50:38.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:38.662338 systemd[1]: Mounting sysroot.mount... Feb 8 23:50:38.681419 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:50:38.682688 systemd[1]: Mounted sysroot.mount. Feb 8 23:50:38.684036 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:50:38.689292 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:50:38.691206 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 8 23:50:38.692747 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 8 23:50:38.700601 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:50:38.700752 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:50:38.709398 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:50:38.718421 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:50:38.722152 systemd[1]: Starting initrd-setup-root.service... 
Feb 8 23:50:38.747901 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Feb 8 23:50:38.763359 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:50:38.763483 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:50:38.763513 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:50:38.763554 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:50:38.777103 initrd-setup-root[702]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:50:38.783550 initrd-setup-root[719]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:50:38.789237 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:50:38.792283 initrd-setup-root[729]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:50:38.854858 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:50:38.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:38.856381 systemd[1]: Starting ignition-mount.service... Feb 8 23:50:38.857464 systemd[1]: Starting sysroot-boot.service... Feb 8 23:50:38.866266 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:50:38.866457 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:50:38.889981 ignition[750]: INFO : Ignition 2.14.0 Feb 8 23:50:38.890829 ignition[750]: INFO : Stage: mount Feb 8 23:50:38.891470 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:50:38.892204 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:50:38.894292 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:50:38.897037 ignition[750]: INFO : mount: mount passed Feb 8 23:50:38.897623 ignition[750]: INFO : Ignition finished successfully Feb 8 23:50:38.904115 systemd[1]: Finished ignition-mount.service. Feb 8 23:50:38.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:38.906198 coreos-metadata[681]: Feb 08 23:50:38.906 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 8 23:50:38.911950 systemd[1]: Finished sysroot-boot.service. Feb 8 23:50:38.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:38.924772 coreos-metadata[681]: Feb 08 23:50:38.924 INFO Fetch successful Feb 8 23:50:38.925489 coreos-metadata[681]: Feb 08 23:50:38.925 INFO wrote hostname ci-3510-3-2-a-bd3a159777.novalocal to /sysroot/etc/hostname Feb 8 23:50:38.929713 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 8 23:50:38.929837 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 8 23:50:38.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:50:38.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:38.932677 systemd[1]: Starting ignition-files.service... Feb 8 23:50:38.943147 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:50:38.952344 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (761) Feb 8 23:50:38.955889 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:50:38.955975 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:50:38.956003 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:50:38.966670 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:50:38.979062 ignition[780]: INFO : Ignition 2.14.0 Feb 8 23:50:38.979062 ignition[780]: INFO : Stage: files Feb 8 23:50:38.981705 ignition[780]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:50:38.981705 ignition[780]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:50:38.981705 ignition[780]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:50:38.988621 ignition[780]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:50:38.991267 ignition[780]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:50:38.991267 ignition[780]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:50:38.995699 ignition[780]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:50:38.998128 ignition[780]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:50:39.001047 unknown[780]: wrote ssh authorized keys file for user: core Feb 8 23:50:39.005222 ignition[780]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:50:39.005222 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:50:39.005222 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:50:39.122606 systemd-networkd[632]: eth0: Gained IPv6LL Feb 8 23:50:39.679609 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:50:40.029406 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:50:40.030786 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:50:40.031798 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 8 23:50:40.429512 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:50:41.154329 ignition[780]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 8 23:50:41.155996 
ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:50:41.157007 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:50:41.158100 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:50:41.506874 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:50:41.959847 ignition[780]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 8 23:50:41.972350 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:50:41.972350 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 8 23:50:41.972350 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 8 23:50:41.972350 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:50:41.972350 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:50:42.106078 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 8 23:50:42.986725 ignition[780]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 8 23:50:42.988423 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:50:42.989262 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:50:42.990128 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 8 23:50:43.099362 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 8 23:50:43.970426 ignition[780]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 8 23:50:43.970426 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:50:43.975775 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:50:43.975775 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:50:44.079160 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 8 23:50:46.300059 ignition[780]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches 
expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 8 23:50:46.300059 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:50:46.300059 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:50:46.307963 ignition[780]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:50:46.307963 ignition[780]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 8 23:50:46.307963 ignition[780]: INFO : files: op(10): op(11): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 8 23:50:46.307963 ignition[780]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 8 23:50:46.307963 ignition[780]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 8 23:50:46.307963 ignition[780]: INFO : files: op(12): [started] processing unit "containerd.service" Feb 8 23:50:46.356030 kernel: kauditd_printk_skb: 27 callbacks suppressed Feb 8 23:50:46.356061 kernel: audit: type=1130 audit(1707436246.323:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.356074 kernel: audit: type=1130 audit(1707436246.341:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:50:46.356085 kernel: audit: type=1131 audit(1707436246.341:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.356097 kernel: audit: type=1130 audit(1707436246.341:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(12): [finished] processing unit "containerd.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(16): [started] processing unit "prepare-critools.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(16): [finished] processing unit "prepare-critools.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:50:46.356234 ignition[780]: INFO : 
files: op(18): [finished] processing unit "prepare-helm.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(1a): [started] processing unit "coreos-metadata.service" Feb 8 23:50:46.356234 ignition[780]: INFO : files: op(1a): op(1b): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 8 23:50:46.319681 systemd[1]: Finished ignition-files.service. Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1a): op(1b): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1a): [finished] processing unit "coreos-metadata.service" Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1c): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1c): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service" Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-helm.service" Feb 8 23:50:46.371427 ignition[780]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-helm.service" Feb 8 23:50:46.371427 ignition[780]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:50:46.371427 ignition[780]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:50:46.371427 ignition[780]: INFO : files: files passed Feb 8 23:50:46.371427 ignition[780]: INFO : Ignition finished successfully Feb 8 23:50:46.398051 kernel: audit: type=1130 audit(1707436246.377:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.398070 kernel: audit: type=1131 audit(1707436246.377:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.328354 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
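In the files stage above, Ignition downloads helm, the CNI plugins, crictl, kubeadm, kubectl and kubelet over HTTPS, logs "file matches expected sum of: <sha512>" for each archive or binary, writes the result under /sysroot, and then installs systemd units, drop-ins and presets. The sketch below mirrors that download-then-verify pattern in plain Python under hypothetical inputs; it is an illustration, not Ignition's implementation, and the URL, digest and destination in the usage comment are placeholders rather than values from the log.

# Minimal sketch of the download-and-verify pattern the files stage logs above:
# fetch a file, compare its SHA512 to an expected digest, write it only on match.
import hashlib
import urllib.request

def fetch_and_verify(url: str, expected_sha512: str, dest: str) -> None:
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    actual = hashlib.sha512(data).hexdigest()
    if actual != expected_sha512:
        raise ValueError(f"checksum mismatch for {url}: got {actual}")
    with open(dest, "wb") as f:
        f.write(data)

# Example with placeholder (hypothetical) values, not values from the log:
# fetch_and_verify("https://example.invalid/kubeadm", "<expected sha512>", "/opt/bin/kubeadm")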
Feb 8 23:50:46.398678 initrd-setup-root-after-ignition[805]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:50:46.332768 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 8 23:50:46.333644 systemd[1]: Starting ignition-quench.service... Feb 8 23:50:46.340877 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:50:46.340967 systemd[1]: Finished ignition-quench.service. Feb 8 23:50:46.341832 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:50:46.342374 systemd[1]: Reached target ignition-complete.target. Feb 8 23:50:46.346661 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:50:46.376582 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:50:46.376798 systemd[1]: Finished initrd-parse-etc.service. Feb 8 23:50:46.378360 systemd[1]: Reached target initrd-fs.target. Feb 8 23:50:46.386267 systemd[1]: Reached target initrd.target. Feb 8 23:50:46.388079 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:50:46.389684 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:50:46.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.414618 systemd[1]: Finished dracut-pre-pivot.service. Feb 8 23:50:46.423439 kernel: audit: type=1130 audit(1707436246.414:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.423585 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:50:46.435505 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:50:46.436531 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:50:46.437581 systemd[1]: Stopped target timers.target. Feb 8 23:50:46.438547 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:50:46.439190 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:50:46.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.440367 systemd[1]: Stopped target initrd.target. Feb 8 23:50:46.444325 kernel: audit: type=1131 audit(1707436246.439:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.444417 systemd[1]: Stopped target basic.target. Feb 8 23:50:46.445395 systemd[1]: Stopped target ignition-complete.target. Feb 8 23:50:46.446423 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:50:46.447455 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:50:46.448499 systemd[1]: Stopped target remote-fs.target. Feb 8 23:50:46.449492 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:50:46.450516 systemd[1]: Stopped target sysinit.target. Feb 8 23:50:46.451497 systemd[1]: Stopped target local-fs.target. Feb 8 23:50:46.452491 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:50:46.453619 systemd[1]: Stopped target swap.target. Feb 8 23:50:46.454594 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 8 23:50:46.455238 systemd[1]: Stopped dracut-pre-mount.service. 
Feb 8 23:50:46.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.456423 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:50:46.460318 kernel: audit: type=1131 audit(1707436246.455:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.460426 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:50:46.461072 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:50:46.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.462199 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:50:46.465902 kernel: audit: type=1131 audit(1707436246.461:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.462361 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 8 23:50:46.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.466601 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:50:46.466725 systemd[1]: Stopped ignition-files.service. Feb 8 23:50:46.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.468290 systemd[1]: Stopping ignition-mount.service... Feb 8 23:50:46.469499 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:50:46.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.469940 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 8 23:50:46.470098 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:50:46.470760 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 8 23:50:46.470912 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:50:46.479182 ignition[819]: INFO : Ignition 2.14.0 Feb 8 23:50:46.479182 ignition[819]: INFO : Stage: umount Feb 8 23:50:46.479182 ignition[819]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:50:46.479182 ignition[819]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:50:46.483688 ignition[819]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:50:46.483688 ignition[819]: INFO : umount: umount passed Feb 8 23:50:46.483688 ignition[819]: INFO : Ignition finished successfully Feb 8 23:50:46.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:50:46.490866 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:50:46.492481 systemd[1]: Stopped ignition-mount.service. Feb 8 23:50:46.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.497916 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 8 23:50:46.500774 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:50:46.501493 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:50:46.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.503420 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:50:46.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.503526 systemd[1]: Stopped ignition-disks.service. Feb 8 23:50:46.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.505249 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:50:46.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.505286 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:50:46.506140 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 8 23:50:46.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.506178 systemd[1]: Stopped ignition-fetch.service. Feb 8 23:50:46.507039 systemd[1]: Stopped target network.target. Feb 8 23:50:46.507958 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:50:46.508000 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:50:46.508865 systemd[1]: Stopped target paths.target. Feb 8 23:50:46.509685 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 8 23:50:46.513331 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:50:46.513812 systemd[1]: Stopped target slices.target. Feb 8 23:50:46.514716 systemd[1]: Stopped target sockets.target. Feb 8 23:50:46.515605 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:50:46.515631 systemd[1]: Closed iscsid.socket. Feb 8 23:50:46.516458 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:50:46.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.516483 systemd[1]: Closed iscsiuio.socket. Feb 8 23:50:46.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.517274 systemd[1]: ignition-setup.service: Deactivated successfully. 
Feb 8 23:50:46.517331 systemd[1]: Stopped ignition-setup.service. Feb 8 23:50:46.518103 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:50:46.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.518139 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:50:46.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.519050 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:50:46.520151 systemd[1]: Stopping systemd-resolved.service... Feb 8 23:50:46.521353 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:50:46.521430 systemd[1]: Finished initrd-cleanup.service. Feb 8 23:50:46.522620 systemd-networkd[632]: eth0: DHCPv6 lease lost Feb 8 23:50:46.529000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:50:46.523523 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:50:46.523614 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:50:46.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.525444 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:50:46.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.525487 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:50:46.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.528514 systemd[1]: Stopping network-cleanup.service... Feb 8 23:50:46.530375 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 8 23:50:46.530427 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 8 23:50:46.531352 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:50:46.531389 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:50:46.532417 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:50:46.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.532455 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:50:46.533419 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:50:46.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.535094 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 8 23:50:46.536383 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 8 23:50:46.544000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:50:46.536480 systemd[1]: Stopped systemd-resolved.service. Feb 8 23:50:46.542069 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:50:46.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.542216 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:50:46.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.544201 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:50:46.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.544238 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:50:46.544840 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:50:46.544869 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:50:46.547071 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:50:46.547115 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:50:46.548071 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:50:46.548106 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:50:46.548996 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:50:46.549040 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 8 23:50:46.550508 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:50:46.557658 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 8 23:50:46.557706 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 8 23:50:46.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.559460 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:50:46.559502 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:50:46.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.561056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:50:46.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.561099 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:50:46.563531 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 8 23:50:46.564131 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:50:46.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.564257 systemd[1]: Stopped network-cleanup.service. 
Feb 8 23:50:46.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:46.565081 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:50:46.565161 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:50:46.565986 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:50:46.567501 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:50:46.577000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:50:46.577000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:50:46.577000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:50:46.577358 systemd[1]: Switching root. Feb 8 23:50:46.581000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:50:46.581000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:50:46.603623 iscsid[641]: iscsid shutting down. Feb 8 23:50:46.604577 systemd-journald[184]: Received SIGTERM from PID 1 (n/a). Feb 8 23:50:46.604681 systemd-journald[184]: Journal stopped Feb 8 23:50:51.627068 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:50:51.627135 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:50:51.627149 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:50:51.627162 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:50:51.627173 kernel: SELinux: policy capability open_perms=1 Feb 8 23:50:51.627183 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:50:51.627194 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:50:51.627204 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:50:51.627214 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:50:51.627225 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:50:51.627239 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:50:51.627252 systemd[1]: Successfully loaded SELinux policy in 95.762ms. Feb 8 23:50:51.627281 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.112ms. Feb 8 23:50:51.627311 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:50:51.627325 systemd[1]: Detected virtualization kvm. Feb 8 23:50:51.627336 systemd[1]: Detected architecture x86-64. Feb 8 23:50:51.627347 systemd[1]: Detected first boot. Feb 8 23:50:51.627358 systemd[1]: Hostname set to . Feb 8 23:50:51.627373 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:50:51.627386 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:50:51.627397 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:50:51.627409 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:50:51.627422 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 8 23:50:51.627435 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:50:51.636904 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:50:51.636926 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 8 23:50:51.636944 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:50:51.636962 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:50:51.636974 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 8 23:50:51.636986 systemd[1]: Created slice system-getty.slice. Feb 8 23:50:51.636998 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:50:51.637011 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:50:51.637022 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:50:51.637035 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:50:51.637047 systemd[1]: Created slice user.slice. Feb 8 23:50:51.637061 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:50:51.637074 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:50:51.637086 systemd[1]: Set up automount boot.automount. Feb 8 23:50:51.637097 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:50:51.637110 systemd[1]: Reached target integritysetup.target. Feb 8 23:50:51.637123 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:50:51.637137 systemd[1]: Reached target remote-fs.target. Feb 8 23:50:51.637149 systemd[1]: Reached target slices.target. Feb 8 23:50:51.637162 systemd[1]: Reached target swap.target. Feb 8 23:50:51.637174 systemd[1]: Reached target torcx.target. Feb 8 23:50:51.637187 systemd[1]: Reached target veritysetup.target. Feb 8 23:50:51.637199 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:50:51.637211 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:50:51.637223 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:50:51.637236 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 8 23:50:51.637250 kernel: audit: type=1400 audit(1707436251.429:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:50:51.637263 kernel: audit: type=1335 audit(1707436251.429:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 8 23:50:51.637275 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:50:51.637287 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:50:51.637352 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:50:51.637368 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:50:51.637380 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:50:51.637393 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:50:51.637405 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:50:51.637421 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:50:51.637433 systemd[1]: Mounting media.mount... Feb 8 23:50:51.637446 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:50:51.637459 systemd[1]: Mounting sys-kernel-debug.mount... 
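The warning above about /run/systemd/system/docker.socket rewrites the legacy ListenStream= path /var/run/docker.sock to /run/docker.sock. On typical systemd layouts /var/run is a symlink to /run, which is why the two paths name the same socket; the small check below tests that assumption rather than asserting it.

# Quick check of the /var/run -> /run aliasing behind the docker.socket warning
# above. Assumes a typical systemd layout; prints what /var/run points at.
import os

path = "/var/run"
if os.path.islink(path):
    print(f"{path} is a symlink to {os.readlink(path)}")
else:
    print(f"{path} is not a symlink on this system")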
Feb 8 23:50:51.637471 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:50:51.637484 systemd[1]: Mounting tmp.mount... Feb 8 23:50:51.637496 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:50:51.637508 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:50:51.637521 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:50:51.637536 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:50:51.637548 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:50:51.637562 systemd[1]: Starting modprobe@drm.service... Feb 8 23:50:51.637574 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:50:51.637586 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:50:51.637599 systemd[1]: Starting modprobe@loop.service... Feb 8 23:50:51.637611 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:50:51.637625 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 8 23:50:51.637638 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 8 23:50:51.637651 systemd[1]: Starting systemd-journald.service... Feb 8 23:50:51.637663 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:50:51.637675 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:50:51.637687 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:50:51.637699 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:50:51.637712 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:50:51.637724 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:50:51.637736 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:50:51.637748 systemd[1]: Mounted media.mount. Feb 8 23:50:51.637767 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:50:51.637780 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:50:51.637792 systemd[1]: Mounted tmp.mount. Feb 8 23:50:51.637803 kernel: fuse: init (API version 7.34) Feb 8 23:50:51.637815 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:50:51.637829 kernel: audit: type=1130 audit(1707436251.590:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.637841 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:50:51.637852 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:50:51.637865 kernel: audit: type=1130 audit(1707436251.600:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.637879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:50:51.637892 kernel: audit: type=1131 audit(1707436251.600:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.637903 systemd[1]: Finished modprobe@dm_mod.service. 
Feb 8 23:50:51.637915 kernel: audit: type=1130 audit(1707436251.612:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.637927 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:50:51.637939 kernel: audit: type=1131 audit(1707436251.612:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.637951 systemd[1]: Finished modprobe@drm.service. Feb 8 23:50:51.637964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:50:51.637979 kernel: audit: type=1305 audit(1707436251.621:95): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:50:51.637990 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:50:51.638002 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:50:51.638014 kernel: audit: type=1300 audit(1707436251.621:95): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd7d25cd90 a2=4000 a3=7ffd7d25ce2c items=0 ppid=1 pid=958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:50:51.638026 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:50:51.638038 kernel: audit: type=1327 audit(1707436251.621:95): proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:50:51.638051 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:50:51.638066 systemd-journald[958]: Journal started Feb 8 23:50:51.638109 systemd-journald[958]: Runtime Journal (/run/log/journal/6b37762427fd4a67bccf32090bdb3864) is 4.9M, max 39.5M, 34.5M free. Feb 8 23:50:51.429000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:50:51.429000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 8 23:50:51.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:50:51.621000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:50:51.621000 audit[958]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd7d25cd90 a2=4000 a3=7ffd7d25ce2c items=0 ppid=1 pid=958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:50:51.621000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:50:51.641086 systemd[1]: Started systemd-journald.service. Feb 8 23:50:51.641530 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:50:51.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.642779 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:50:51.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.643553 systemd[1]: Reached target network-pre.target. Feb 8 23:50:51.645448 kernel: loop: module loaded Feb 8 23:50:51.648341 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:50:51.651090 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 8 23:50:51.651588 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:50:51.656262 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:50:51.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.681455 systemd-journald[958]: Time spent on flushing to /var/log/journal/6b37762427fd4a67bccf32090bdb3864 is 59.229ms for 1070 entries. Feb 8 23:50:51.681455 systemd-journald[958]: System Journal (/var/log/journal/6b37762427fd4a67bccf32090bdb3864) is 8.0M, max 584.8M, 576.8M free. Feb 8 23:50:52.027352 systemd-journald[958]: Received client request to flush runtime journal. Feb 8 23:50:51.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.660598 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:50:51.661180 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:50:52.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:51.663414 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:50:51.667043 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:50:51.671589 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:50:51.671769 systemd[1]: Finished modprobe@loop.service. Feb 8 23:50:52.035456 udevadm[1014]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:50:51.672428 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:50:51.672988 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:50:51.673583 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:50:51.700140 systemd[1]: Finished flatcar-tmpfiles.service. 
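systemd-journald above reports a runtime journal under /run/log/journal/6b37762427fd4a67bccf32090bdb3864 and a system journal under /var/log/journal with their current and maximum sizes, then flushes the runtime journal on request. As a rough cross-check of those reported sizes (an illustration only, not journald's own accounting), the on-disk usage of a journal directory can be summed as below; the machine-id path segment comes from this log and would differ on another host.

# Rough disk-usage check for a journal directory, to compare with the sizes
# systemd-journald reports above. Adjust the machine-id segment for other hosts.
import os

JOURNAL_DIR = "/var/log/journal/6b37762427fd4a67bccf32090bdb3864"

def dir_size_bytes(root: str) -> int:
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

if __name__ == "__main__":
    print(f"{JOURNAL_DIR}: {dir_size_bytes(JOURNAL_DIR) / 1024 / 1024:.1f} MiB")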
Feb 8 23:50:51.702403 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:50:51.729097 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:50:51.748482 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:50:51.750719 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:50:51.893740 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:50:51.895278 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:50:51.991270 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:50:51.995954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:50:52.029262 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:50:52.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:52.061535 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:50:52.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:52.598812 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:50:52.602609 systemd[1]: Starting systemd-udevd.service... Feb 8 23:50:52.651236 systemd-udevd[1023]: Using default interface naming scheme 'v252'. Feb 8 23:50:52.711766 systemd[1]: Started systemd-udevd.service. Feb 8 23:50:52.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:52.716892 systemd[1]: Starting systemd-networkd.service... Feb 8 23:50:52.739447 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:50:52.804159 systemd[1]: Found device dev-ttyS0.device. Feb 8 23:50:52.826722 systemd[1]: Started systemd-userdbd.service. Feb 8 23:50:52.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:52.859217 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:50:52.890342 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 8 23:50:52.920339 kernel: ACPI: button: Power Button [PWRF] Feb 8 23:50:52.905000 audit[1035]: AVC avc: denied { confidentiality } for pid=1035 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:50:52.937115 systemd-networkd[1029]: lo: Link UP Feb 8 23:50:52.937126 systemd-networkd[1029]: lo: Gained carrier Feb 8 23:50:52.937724 systemd-networkd[1029]: Enumeration completed Feb 8 23:50:52.937853 systemd[1]: Started systemd-networkd.service. Feb 8 23:50:52.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:52.938602 systemd-networkd[1029]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
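systemd-networkd above enumerates links, brings lo up, and configures eth0 from /usr/lib/systemd/network/zz-default.network before eth0 itself reports carrier. A quick way to confirm link state outside of networkd is to read the kernel's operstate attribute in sysfs; the sketch below assumes the interface names lo and eth0 seen in this log.

# Read the kernel's reported link state for the interfaces named in the log.
# /sys/class/net/<iface>/operstate is a standard sysfs attribute.
from pathlib import Path

def link_state(iface: str) -> str:
    return Path(f"/sys/class/net/{iface}/operstate").read_text().strip()

if __name__ == "__main__":
    for iface in ("lo", "eth0"):
        print(iface, link_state(iface))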
Feb 8 23:50:52.940372 systemd-networkd[1029]: eth0: Link UP Feb 8 23:50:52.940382 systemd-networkd[1029]: eth0: Gained carrier Feb 8 23:50:52.905000 audit[1035]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56459941b630 a1=32194 a2=7fb648bedbc5 a3=5 items=108 ppid=1023 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:50:52.905000 audit: CWD cwd="/" Feb 8 23:50:52.905000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=1 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=2 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=3 name=(null) inode=14428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=4 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=5 name=(null) inode=14429 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=6 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=7 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=8 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=9 name=(null) inode=14431 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=10 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=11 name=(null) inode=14432 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=12 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=13 name=(null) inode=14433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 
23:50:52.905000 audit: PATH item=14 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=15 name=(null) inode=14434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=16 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=17 name=(null) inode=14435 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=18 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=19 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=20 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=21 name=(null) inode=14437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=22 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=23 name=(null) inode=14438 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=24 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=25 name=(null) inode=14439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=26 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=27 name=(null) inode=14440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=28 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=29 name=(null) inode=14441 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=30 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=31 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=32 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=33 name=(null) inode=14443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=34 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=35 name=(null) inode=14444 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=36 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=37 name=(null) inode=14445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=38 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=39 name=(null) inode=14446 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=40 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=41 name=(null) inode=14447 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=42 name=(null) inode=14427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=43 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=44 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=45 name=(null) inode=14449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=46 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=47 name=(null) inode=14450 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=48 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=49 name=(null) inode=14451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=50 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=51 name=(null) inode=14452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=52 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=53 name=(null) inode=14453 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.949600 systemd-networkd[1029]: eth0: DHCPv4 address 172.24.4.64/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 8 23:50:52.905000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=55 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=56 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=57 name=(null) inode=14455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=58 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=59 name=(null) inode=14456 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=60 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=61 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=62 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=63 name=(null) inode=14458 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=64 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=65 name=(null) inode=14459 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=66 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=67 name=(null) inode=14460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=68 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=69 name=(null) inode=14461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=70 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=71 name=(null) inode=14462 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=72 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=73 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=74 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=75 name=(null) inode=14464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=76 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=77 name=(null) inode=14465 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=78 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 
audit: PATH item=79 name=(null) inode=14466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=80 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=81 name=(null) inode=14467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=82 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=83 name=(null) inode=14468 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=84 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=85 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=86 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=87 name=(null) inode=14470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=88 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=89 name=(null) inode=14471 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=90 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=91 name=(null) inode=14472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=92 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=93 name=(null) inode=14473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=94 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=95 name=(null) inode=14474 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=96 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=97 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=98 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=99 name=(null) inode=14476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=100 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=101 name=(null) inode=14477 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=102 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=103 name=(null) inode=14478 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=104 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=105 name=(null) inode=14479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=106 name=(null) inode=14475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PATH item=107 name=(null) inode=14480 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:50:52.905000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:50:52.960314 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 8 23:50:52.966689 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 8 23:50:52.966899 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:50:53.007849 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:50:53.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:53.009738 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:50:53.039794 lvm[1053]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Feb 8 23:50:53.066396 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:50:53.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:53.067017 systemd[1]: Reached target cryptsetup.target. Feb 8 23:50:53.068641 systemd[1]: Starting lvm2-activation.service... Feb 8 23:50:53.073523 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:50:53.104455 systemd[1]: Finished lvm2-activation.service. Feb 8 23:50:53.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:53.105045 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:50:53.105491 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:50:53.105516 systemd[1]: Reached target local-fs.target. Feb 8 23:50:53.105926 systemd[1]: Reached target machines.target. Feb 8 23:50:53.107587 systemd[1]: Starting ldconfig.service... Feb 8 23:50:53.109521 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:50:53.109571 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:50:53.110699 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:50:53.112113 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:50:53.114378 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:50:53.115440 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:50:53.115493 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:50:53.117200 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:50:53.135573 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1058 (bootctl) Feb 8 23:50:53.140152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:50:53.182411 systemd-tmpfiles[1061]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:50:53.184089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:50:53.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:53.217586 systemd-tmpfiles[1061]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:50:53.223650 systemd-tmpfiles[1061]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:50:54.371786 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:50:54.374545 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:50:54.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:50:54.509801 systemd-fsck[1067]: fsck.fat 4.2 (2021-01-31) Feb 8 23:50:54.509801 systemd-fsck[1067]: /dev/vda1: 789 files, 115332/258078 clusters Feb 8 23:50:54.515045 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:50:54.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:54.519751 systemd[1]: Mounting boot.mount... Feb 8 23:50:54.558515 systemd[1]: Mounted boot.mount. Feb 8 23:50:54.608455 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:50:54.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:54.610913 systemd-networkd[1029]: eth0: Gained IPv6LL Feb 8 23:50:54.660041 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:50:54.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:54.662015 systemd[1]: Starting audit-rules.service... Feb 8 23:50:54.663465 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:50:54.665289 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:50:54.668387 systemd[1]: Starting systemd-resolved.service... Feb 8 23:50:54.674177 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:50:54.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:54.678009 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:50:54.679339 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:50:54.680745 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:50:54.688000 audit[1081]: SYSTEM_BOOT pid=1081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:50:54.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:54.689745 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:50:54.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:50:54.724454 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 8 23:50:54.753000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:50:54.753000 audit[1098]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd303746b0 a2=420 a3=0 items=0 ppid=1075 pid=1098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:50:54.753000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:50:54.754098 augenrules[1098]: No rules Feb 8 23:50:54.754562 systemd[1]: Finished audit-rules.service. Feb 8 23:50:54.793088 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:50:54.793823 systemd[1]: Reached target time-set.target. Feb 8 23:50:54.802256 systemd-resolved[1078]: Positive Trust Anchors: Feb 8 23:50:54.802273 systemd-resolved[1078]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:50:54.802333 systemd-resolved[1078]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:50:54.819024 systemd-resolved[1078]: Using system hostname 'ci-3510-3-2-a-bd3a159777.novalocal'. Feb 8 23:50:54.820994 systemd[1]: Started systemd-resolved.service. Feb 8 23:50:54.821543 systemd[1]: Reached target network.target. Feb 8 23:50:54.825732 systemd[1]: Reached target nss-lookup.target. Feb 8 23:50:54.968036 ldconfig[1057]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:50:54.976516 systemd[1]: Finished ldconfig.service. Feb 8 23:50:54.978488 systemd[1]: Starting systemd-update-done.service... Feb 8 23:50:54.986171 systemd[1]: Finished systemd-update-done.service. Feb 8 23:50:54.986773 systemd[1]: Reached target sysinit.target. Feb 8 23:50:54.987285 systemd[1]: Started motdgen.path. Feb 8 23:50:54.987739 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:50:54.988400 systemd[1]: Started logrotate.timer. Feb 8 23:50:54.988893 systemd[1]: Started mdadm.timer. Feb 8 23:50:54.989289 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:50:54.989769 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:50:54.989800 systemd[1]: Reached target paths.target. Feb 8 23:50:54.990197 systemd[1]: Reached target timers.target. Feb 8 23:50:54.990980 systemd[1]: Listening on dbus.socket. Feb 8 23:50:54.992593 systemd[1]: Starting docker.socket... Feb 8 23:50:54.994907 systemd[1]: Listening on sshd.socket. Feb 8 23:50:54.995462 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:50:54.995762 systemd[1]: Listening on docker.socket. Feb 8 23:50:54.996200 systemd[1]: Reached target sockets.target. Feb 8 23:50:54.996634 systemd[1]: Reached target basic.target. 
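The proctitle= value in the audit PROCTITLE record above is the process command line, hex-encoded with NUL separators between arguments. A small decoding sketch (the helper name is made up for illustration; the sample value is copied verbatim from the audit-rules record above):

```python
# Illustrative sketch: decode auditd's hex-encoded PROCTITLE field.
# argv entries are separated by NUL bytes in the raw value.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

# Value copied from the audit-rules record above.
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))  # -> /sbin/auditctl -R /etc/audit/audit.rules
```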
Feb 8 23:50:54.997144 systemd[1]: System is tainted: cgroupsv1 Feb 8 23:50:54.997189 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:50:54.997210 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:50:54.998164 systemd[1]: Starting containerd.service... Feb 8 23:50:55.000052 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 8 23:50:55.002554 systemd[1]: Starting dbus.service... Feb 8 23:50:55.006658 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:50:55.011964 systemd[1]: Starting extend-filesystems.service... Feb 8 23:50:55.013513 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:50:55.066225 jq[1113]: false Feb 8 23:50:55.015774 systemd[1]: Starting motdgen.service... Feb 8 23:50:55.019423 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:50:55.024753 systemd[1]: Starting prepare-critools.service... Feb 8 23:50:55.026961 systemd[1]: Starting prepare-helm.service... Feb 8 23:50:55.029944 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:50:55.033563 systemd[1]: Starting sshd-keygen.service... Feb 8 23:50:55.035795 systemd[1]: Starting systemd-logind.service... Feb 8 23:50:55.039366 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:50:55.039611 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:50:55.042185 systemd[1]: Starting update-engine.service... Feb 8 23:50:55.049025 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:50:55.050863 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:50:55.051090 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:50:55.058688 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:50:55.059050 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:50:55.061389 systemd[1]: Created slice system-sshd.slice. Feb 8 23:50:55.074119 systemd-timesyncd[1080]: Contacted time server 51.178.79.86:123 (0.flatcar.pool.ntp.org). Feb 8 23:50:55.074181 systemd-timesyncd[1080]: Initial clock synchronization to Thu 2024-02-08 23:50:55.244159 UTC. Feb 8 23:50:55.102014 jq[1130]: true Feb 8 23:50:55.110683 tar[1132]: ./ Feb 8 23:50:55.110683 tar[1132]: ./macvlan Feb 8 23:50:55.111039 tar[1134]: linux-amd64/helm Feb 8 23:50:55.111203 tar[1133]: crictl Feb 8 23:50:55.125733 extend-filesystems[1117]: Found vda Feb 8 23:50:55.126836 extend-filesystems[1117]: Found vda1 Feb 8 23:50:55.127939 extend-filesystems[1117]: Found vda2 Feb 8 23:50:55.129134 extend-filesystems[1117]: Found vda3 Feb 8 23:50:55.129134 extend-filesystems[1117]: Found usr Feb 8 23:50:55.129134 extend-filesystems[1117]: Found vda4 Feb 8 23:50:55.129134 extend-filesystems[1117]: Found vda6 Feb 8 23:50:55.129134 extend-filesystems[1117]: Found vda7 Feb 8 23:50:55.129134 extend-filesystems[1117]: Found vda9 Feb 8 23:50:55.129134 extend-filesystems[1117]: Checking size of /dev/vda9 Feb 8 23:50:55.145611 jq[1147]: true Feb 8 23:50:55.146661 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:50:55.146918 systemd[1]: Finished motdgen.service. 
Feb 8 23:50:55.150563 dbus-daemon[1112]: [system] SELinux support is enabled Feb 8 23:50:55.150715 systemd[1]: Started dbus.service. Feb 8 23:50:55.153053 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:50:55.153074 systemd[1]: Reached target system-config.target. Feb 8 23:50:55.154518 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:50:55.154533 systemd[1]: Reached target user-config.target. Feb 8 23:50:55.163588 extend-filesystems[1117]: Resized partition /dev/vda9 Feb 8 23:50:55.171995 extend-filesystems[1166]: resize2fs 1.46.5 (30-Dec-2021) Feb 8 23:50:55.192115 update_engine[1129]: I0208 23:50:55.190574 1129 main.cc:92] Flatcar Update Engine starting Feb 8 23:50:55.205556 systemd[1]: Started update-engine.service. Feb 8 23:50:55.205940 update_engine[1129]: I0208 23:50:55.205675 1129 update_check_scheduler.cc:74] Next update check in 11m10s Feb 8 23:50:55.207721 systemd[1]: Started locksmithd.service. Feb 8 23:50:55.208641 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 8 23:50:55.295455 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 8 23:50:55.372946 coreos-metadata[1111]: Feb 08 23:50:55.304 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 8 23:50:55.376951 extend-filesystems[1166]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 8 23:50:55.376951 extend-filesystems[1166]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 8 23:50:55.376951 extend-filesystems[1166]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 8 23:50:55.398286 extend-filesystems[1117]: Resized filesystem in /dev/vda9 Feb 8 23:50:55.400068 bash[1181]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:50:55.377132 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:50:55.400425 env[1139]: time="2024-02-08T23:50:55.380445160Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:50:55.377425 systemd[1]: Finished extend-filesystems.service. Feb 8 23:50:55.377985 systemd-logind[1126]: Watching system buttons on /dev/input/event1 (Power Button) Feb 8 23:50:55.378004 systemd-logind[1126]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:50:55.381578 systemd-logind[1126]: New seat seat0. Feb 8 23:50:55.394791 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:50:55.401146 systemd[1]: Started systemd-logind.service. Feb 8 23:50:55.437554 tar[1132]: ./static Feb 8 23:50:55.516212 env[1139]: time="2024-02-08T23:50:55.516154322Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:50:55.522558 env[1139]: time="2024-02-08T23:50:55.522529880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:50:55.524868 coreos-metadata[1111]: Feb 08 23:50:55.524 INFO Fetch successful Feb 8 23:50:55.524868 coreos-metadata[1111]: Feb 08 23:50:55.524 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 8 23:50:55.525215 env[1139]: time="2024-02-08T23:50:55.525181343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:50:55.527350 env[1139]: time="2024-02-08T23:50:55.527331215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:50:55.528459 env[1139]: time="2024-02-08T23:50:55.528427030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:50:55.528575 env[1139]: time="2024-02-08T23:50:55.528544220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:50:55.528652 env[1139]: time="2024-02-08T23:50:55.528632946Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:50:55.528718 env[1139]: time="2024-02-08T23:50:55.528702386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:50:55.528875 env[1139]: time="2024-02-08T23:50:55.528856545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:50:55.529250 env[1139]: time="2024-02-08T23:50:55.529229615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:50:55.529519 env[1139]: time="2024-02-08T23:50:55.529493831Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:50:55.529604 env[1139]: time="2024-02-08T23:50:55.529586905Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 8 23:50:55.529730 env[1139]: time="2024-02-08T23:50:55.529710196Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:50:55.529824 env[1139]: time="2024-02-08T23:50:55.529806467Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:50:55.539123 tar[1132]: ./vlan Feb 8 23:50:55.542215 coreos-metadata[1111]: Feb 08 23:50:55.541 INFO Fetch successful Feb 8 23:50:55.542832 env[1139]: time="2024-02-08T23:50:55.542764830Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:50:55.542924 env[1139]: time="2024-02-08T23:50:55.542904893Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:50:55.543011 env[1139]: time="2024-02-08T23:50:55.542995333Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:50:55.543118 env[1139]: time="2024-02-08T23:50:55.543100640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:50:55.543256 env[1139]: time="2024-02-08T23:50:55.543239721Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 8 23:50:55.543358 env[1139]: time="2024-02-08T23:50:55.543341943Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:50:55.543431 env[1139]: time="2024-02-08T23:50:55.543415731Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:50:55.543515 env[1139]: time="2024-02-08T23:50:55.543498156Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:50:55.543609 env[1139]: time="2024-02-08T23:50:55.543593625Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:50:55.543686 env[1139]: time="2024-02-08T23:50:55.543670389Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:50:55.543761 env[1139]: time="2024-02-08T23:50:55.543746111Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:50:55.543834 env[1139]: time="2024-02-08T23:50:55.543819458Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:50:55.544245 env[1139]: time="2024-02-08T23:50:55.544224768Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:50:55.544440 env[1139]: time="2024-02-08T23:50:55.544420546Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:50:55.544944 env[1139]: time="2024-02-08T23:50:55.544923870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:50:55.545040 env[1139]: time="2024-02-08T23:50:55.545022905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545128 env[1139]: time="2024-02-08T23:50:55.545111041Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:50:55.545270 env[1139]: time="2024-02-08T23:50:55.545251825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545384 env[1139]: time="2024-02-08T23:50:55.545367001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545624 env[1139]: time="2024-02-08T23:50:55.545534735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545624 env[1139]: time="2024-02-08T23:50:55.545602863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545694 env[1139]: time="2024-02-08T23:50:55.545624494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545694 env[1139]: time="2024-02-08T23:50:55.545644601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545694 env[1139]: time="2024-02-08T23:50:55.545659970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545694 env[1139]: time="2024-02-08T23:50:55.545673916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 8 23:50:55.545694 env[1139]: time="2024-02-08T23:50:55.545693263Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:50:55.545883 env[1139]: time="2024-02-08T23:50:55.545855838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545927 env[1139]: time="2024-02-08T23:50:55.545889260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545927 env[1139]: time="2024-02-08T23:50:55.545907925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.545927 env[1139]: time="2024-02-08T23:50:55.545922372Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:50:55.546006 env[1139]: time="2024-02-08T23:50:55.545941819Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:50:55.546006 env[1139]: time="2024-02-08T23:50:55.545956837Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:50:55.546006 env[1139]: time="2024-02-08T23:50:55.545977025Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:50:55.546087 env[1139]: time="2024-02-08T23:50:55.546024925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:50:55.546368 env[1139]: time="2024-02-08T23:50:55.546275354Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.546373318Z" level=info msg="Connect containerd service" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.546413584Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.546979615Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.548466334Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.548513572Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.548563886Z" level=info msg="containerd successfully booted in 0.225096s" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.549985934Z" level=info msg="Start subscribing containerd event" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.550065783Z" level=info msg="Start recovering state" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.550157876Z" level=info msg="Start event monitor" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.550268654Z" level=info msg="Start snapshots syncer" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.550283211Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:50:55.553660 env[1139]: time="2024-02-08T23:50:55.550334457Z" level=info msg="Start streaming server" Feb 8 23:50:55.547614 unknown[1111]: wrote ssh authorized keys file for user: core Feb 8 23:50:55.548695 systemd[1]: Started containerd.service. Feb 8 23:50:55.591319 update-ssh-keys[1191]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:50:55.592063 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 8 23:50:55.649280 tar[1132]: ./portmap Feb 8 23:50:55.713278 tar[1132]: ./host-local Feb 8 23:50:55.772518 tar[1132]: ./vrf Feb 8 23:50:55.853063 tar[1132]: ./bridge Feb 8 23:50:55.946766 tar[1132]: ./tuning Feb 8 23:50:56.007302 tar[1132]: ./firewall Feb 8 23:50:56.097238 tar[1132]: ./host-device Feb 8 23:50:56.170263 tar[1132]: ./sbr Feb 8 23:50:56.214347 tar[1134]: linux-amd64/LICENSE Feb 8 23:50:56.225527 tar[1134]: linux-amd64/README.md Feb 8 23:50:56.230391 systemd[1]: Finished prepare-helm.service. Feb 8 23:50:56.237344 tar[1132]: ./loopback Feb 8 23:50:56.304510 tar[1132]: ./dhcp Feb 8 23:50:56.310860 systemd[1]: Finished prepare-critools.service. Feb 8 23:50:56.364079 sshd_keygen[1158]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:50:56.388937 systemd[1]: Finished sshd-keygen.service. Feb 8 23:50:56.391013 systemd[1]: Starting issuegen.service... Feb 8 23:50:56.392484 systemd[1]: Started sshd@0-172.24.4.64:22-172.24.4.1:52742.service. Feb 8 23:50:56.403775 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:50:56.404056 systemd[1]: Finished issuegen.service. Feb 8 23:50:56.406389 systemd[1]: Starting systemd-user-sessions.service... 
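The coreos-metadata lines above fetch SSH public keys from the EC2-compatible OpenStack metadata endpoint and write them to /home/core/.ssh/authorized_keys. A hedged sketch of the same two-step fetch; the URL and paths are taken from the log, while the listing format, error handling, and output are assumptions rather than what the real service does:

```python
# Illustrative only: mimic the two metadata requests logged above.
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data"

def fetch(path: str) -> str:
    with urllib.request.urlopen(f"{METADATA}/{path}", timeout=5) as resp:
        return resp.read().decode().strip()

# First request lists key indexes (assumed "0=name" per line); second fetches each key.
indexes = [line.split("=", 1)[0] for line in fetch("public-keys").splitlines()]
keys = [fetch(f"public-keys/{idx}/openssh-key") for idx in indexes]
print("\n".join(keys))
```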
Feb 8 23:50:56.411132 tar[1132]: ./ptp Feb 8 23:50:56.417379 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:50:56.419568 systemd[1]: Started getty@tty1.service. Feb 8 23:50:56.421393 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:50:56.422104 systemd[1]: Reached target getty.target. Feb 8 23:50:56.455052 tar[1132]: ./ipvlan Feb 8 23:50:56.477633 locksmithd[1174]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:50:56.492296 tar[1132]: ./bandwidth Feb 8 23:50:56.603370 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:50:56.605297 systemd[1]: Reached target multi-user.target. Feb 8 23:50:56.609933 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:50:56.626420 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:50:56.627299 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:50:56.631496 systemd[1]: Startup finished in 13.080s (kernel) + 9.746s (userspace) = 22.827s. Feb 8 23:50:58.181725 sshd[1215]: Accepted publickey for core from 172.24.4.1 port 52742 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:50:58.186276 sshd[1215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:50:58.213108 systemd[1]: Created slice user-500.slice. Feb 8 23:50:58.215621 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:50:58.227147 systemd-logind[1126]: New session 1 of user core. Feb 8 23:50:58.240953 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:50:58.244359 systemd[1]: Starting user@500.service... Feb 8 23:50:58.256180 (systemd)[1235]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:50:58.384520 systemd[1235]: Queued start job for default target default.target. Feb 8 23:50:58.385657 systemd[1235]: Reached target paths.target. Feb 8 23:50:58.385788 systemd[1235]: Reached target sockets.target. Feb 8 23:50:58.385879 systemd[1235]: Reached target timers.target. Feb 8 23:50:58.385961 systemd[1235]: Reached target basic.target. Feb 8 23:50:58.386160 systemd[1]: Started user@500.service. Feb 8 23:50:58.387163 systemd[1]: Started session-1.scope. Feb 8 23:50:58.391715 systemd[1235]: Reached target default.target. Feb 8 23:50:58.392386 systemd[1235]: Startup finished in 122ms. Feb 8 23:50:58.926671 systemd[1]: Started sshd@1-172.24.4.64:22-172.24.4.1:52754.service. Feb 8 23:51:00.450354 sshd[1244]: Accepted publickey for core from 172.24.4.1 port 52754 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:51:00.453694 sshd[1244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:51:00.464807 systemd[1]: Started session-2.scope. Feb 8 23:51:00.465246 systemd-logind[1126]: New session 2 of user core. Feb 8 23:51:01.102818 sshd[1244]: pam_unix(sshd:session): session closed for user core Feb 8 23:51:01.106200 systemd[1]: Started sshd@2-172.24.4.64:22-172.24.4.1:52762.service. Feb 8 23:51:01.113381 systemd[1]: sshd@1-172.24.4.64:22-172.24.4.1:52754.service: Deactivated successfully. Feb 8 23:51:01.114581 systemd[1]: session-2.scope: Deactivated successfully. Feb 8 23:51:01.119047 systemd-logind[1126]: Session 2 logged out. Waiting for processes to exit. Feb 8 23:51:01.121042 systemd-logind[1126]: Removed session 2. 
Feb 8 23:51:02.648818 sshd[1249]: Accepted publickey for core from 172.24.4.1 port 52762 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:51:02.652185 sshd[1249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:51:02.663464 systemd[1]: Started session-3.scope. Feb 8 23:51:02.664461 systemd-logind[1126]: New session 3 of user core. Feb 8 23:51:03.297957 sshd[1249]: pam_unix(sshd:session): session closed for user core Feb 8 23:51:03.303212 systemd[1]: Started sshd@3-172.24.4.64:22-172.24.4.1:52770.service. Feb 8 23:51:03.309242 systemd[1]: sshd@2-172.24.4.64:22-172.24.4.1:52762.service: Deactivated successfully. Feb 8 23:51:03.310848 systemd[1]: session-3.scope: Deactivated successfully. Feb 8 23:51:03.313667 systemd-logind[1126]: Session 3 logged out. Waiting for processes to exit. Feb 8 23:51:03.317183 systemd-logind[1126]: Removed session 3. Feb 8 23:51:04.838932 sshd[1256]: Accepted publickey for core from 172.24.4.1 port 52770 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:51:04.841541 sshd[1256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:51:04.853110 systemd[1]: Started session-4.scope. Feb 8 23:51:04.853955 systemd-logind[1126]: New session 4 of user core. Feb 8 23:51:05.487998 sshd[1256]: pam_unix(sshd:session): session closed for user core Feb 8 23:51:05.492466 systemd[1]: Started sshd@4-172.24.4.64:22-172.24.4.1:47254.service. Feb 8 23:51:05.500003 systemd[1]: sshd@3-172.24.4.64:22-172.24.4.1:52770.service: Deactivated successfully. Feb 8 23:51:05.502101 systemd-logind[1126]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:51:05.502256 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:51:05.505621 systemd-logind[1126]: Removed session 4. Feb 8 23:51:06.977878 sshd[1263]: Accepted publickey for core from 172.24.4.1 port 47254 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:51:06.981185 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:51:06.991242 systemd-logind[1126]: New session 5 of user core. Feb 8 23:51:06.992068 systemd[1]: Started session-5.scope. Feb 8 23:51:07.590419 sudo[1269]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 8 23:51:07.590925 sudo[1269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:51:07.604745 dbus-daemon[1112]: \xd0M\xbeyaU: received setenforce notice (enforcing=243747680) Feb 8 23:51:07.606759 sudo[1269]: pam_unix(sudo:session): session closed for user root Feb 8 23:51:07.777940 sshd[1263]: pam_unix(sshd:session): session closed for user core Feb 8 23:51:07.780293 systemd[1]: Started sshd@5-172.24.4.64:22-172.24.4.1:47270.service. Feb 8 23:51:07.786708 systemd[1]: sshd@4-172.24.4.64:22-172.24.4.1:47254.service: Deactivated successfully. Feb 8 23:51:07.789620 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:51:07.790822 systemd-logind[1126]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:51:07.798675 systemd-logind[1126]: Removed session 5. Feb 8 23:51:09.202438 sshd[1271]: Accepted publickey for core from 172.24.4.1 port 47270 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:51:09.204743 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:51:09.214110 systemd-logind[1126]: New session 6 of user core. Feb 8 23:51:09.215574 systemd[1]: Started session-6.scope. 
Feb 8 23:51:09.655283 sudo[1278]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 8 23:51:09.656352 sudo[1278]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:51:09.662497 sudo[1278]: pam_unix(sudo:session): session closed for user root Feb 8 23:51:09.672243 sudo[1277]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 8 23:51:09.672771 sudo[1277]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:51:09.693349 systemd[1]: Stopping audit-rules.service... Feb 8 23:51:09.695000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 8 23:51:09.698028 kernel: kauditd_printk_skb: 150 callbacks suppressed Feb 8 23:51:09.698156 kernel: audit: type=1305 audit(1707436269.695:133): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 8 23:51:09.703606 kernel: audit: type=1300 audit(1707436269.695:133): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffca12a4490 a2=420 a3=0 items=0 ppid=1 pid=1281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:09.695000 audit[1281]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffca12a4490 a2=420 a3=0 items=0 ppid=1 pid=1281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:09.703875 auditctl[1281]: No rules Feb 8 23:51:09.695000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 8 23:51:09.718357 kernel: audit: type=1327 audit(1707436269.695:133): proctitle=2F7362696E2F617564697463746C002D44 Feb 8 23:51:09.714865 systemd[1]: audit-rules.service: Deactivated successfully. Feb 8 23:51:09.715415 systemd[1]: Stopped audit-rules.service. Feb 8 23:51:09.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:09.719597 systemd[1]: Starting audit-rules.service... Feb 8 23:51:09.728796 kernel: audit: type=1131 audit(1707436269.715:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:09.759090 augenrules[1299]: No rules Feb 8 23:51:09.761148 systemd[1]: Finished audit-rules.service. Feb 8 23:51:09.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:09.763493 sudo[1277]: pam_unix(sudo:session): session closed for user root Feb 8 23:51:09.762000 audit[1277]: USER_END pid=1277 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 8 23:51:09.779494 kernel: audit: type=1130 audit(1707436269.760:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:09.779580 kernel: audit: type=1106 audit(1707436269.762:136): pid=1277 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:51:09.779660 kernel: audit: type=1104 audit(1707436269.762:137): pid=1277 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:51:09.762000 audit[1277]: CRED_DISP pid=1277 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:51:09.936865 sshd[1271]: pam_unix(sshd:session): session closed for user core Feb 8 23:51:09.942717 systemd[1]: Started sshd@6-172.24.4.64:22-172.24.4.1:47280.service. Feb 8 23:51:09.941000 audit[1271]: USER_END pid=1271 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:09.959631 kernel: audit: type=1106 audit(1707436269.941:138): pid=1271 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:09.959829 kernel: audit: type=1104 audit(1707436269.941:139): pid=1271 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:09.941000 audit[1271]: CRED_DISP pid=1271 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:09.956631 systemd-logind[1126]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:51:09.957125 systemd[1]: sshd@5-172.24.4.64:22-172.24.4.1:47270.service: Deactivated successfully. Feb 8 23:51:09.958847 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:51:09.967995 systemd-logind[1126]: Removed session 6. Feb 8 23:51:09.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.64:22-172.24.4.1:47280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:09.984400 kernel: audit: type=1130 audit(1707436269.942:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.64:22-172.24.4.1:47280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:51:09.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.24.4.64:22-172.24.4.1:47270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:11.146000 audit[1304]: USER_ACCT pid=1304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:11.147240 sshd[1304]: Accepted publickey for core from 172.24.4.1 port 47280 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:51:11.148000 audit[1304]: CRED_ACQ pid=1304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:11.148000 audit[1304]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6520a420 a2=3 a3=0 items=0 ppid=1 pid=1304 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:11.148000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:51:11.150061 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:51:11.160569 systemd-logind[1126]: New session 7 of user core. Feb 8 23:51:11.161063 systemd[1]: Started session-7.scope. Feb 8 23:51:11.173000 audit[1304]: USER_START pid=1304 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:11.177000 audit[1309]: CRED_ACQ pid=1309 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:11.636000 audit[1310]: USER_ACCT pid=1310 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:51:11.636000 audit[1310]: CRED_REFR pid=1310 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:51:11.637782 sudo[1310]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:51:11.638231 sudo[1310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:51:11.641000 audit[1310]: USER_START pid=1310 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:51:12.334698 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:51:12.347832 systemd[1]: Finished systemd-networkd-wait-online.service. 
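The sshd and sudo records above carry their detail inside an audit msg='...' blob (op=, grantors=, acct=, res= and so on). Here is a minimal Python 3 sketch for splitting such a blob into key/value pairs; the field names are the ones shown in this excerpt, while the parsing approach itself is an assumption rather than how auditd represents them internally.

    import re

    # Split an audit msg='...' blob (as seen in the PAM records above) into fields.
    FIELD = re.compile(r'(\w+)=("[^"]*"|\S+)')

    msg = ('op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,'
           'pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" '
           'exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 '
           'terminal=ssh res=success')

    fields = {key: value.strip('"') for key, value in FIELD.findall(msg)}
    print(fields['op'], fields['acct'], fields['res'])
    # -> PAM:session_open core success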
Feb 8 23:51:12.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:12.350620 systemd[1]: Reached target network-online.target. Feb 8 23:51:12.355176 systemd[1]: Starting docker.service... Feb 8 23:51:12.421261 env[1327]: time="2024-02-08T23:51:12.421189868Z" level=info msg="Starting up" Feb 8 23:51:12.424364 env[1327]: time="2024-02-08T23:51:12.424241400Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:51:12.424539 env[1327]: time="2024-02-08T23:51:12.424502292Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:51:12.424758 env[1327]: time="2024-02-08T23:51:12.424712110Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:51:12.424914 env[1327]: time="2024-02-08T23:51:12.424870674Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:51:12.434507 env[1327]: time="2024-02-08T23:51:12.434454792Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:51:12.434507 env[1327]: time="2024-02-08T23:51:12.434488810Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:51:12.434799 env[1327]: time="2024-02-08T23:51:12.434518460Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:51:12.434799 env[1327]: time="2024-02-08T23:51:12.434533335Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:51:12.554595 env[1327]: time="2024-02-08T23:51:12.554498547Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 8 23:51:12.554595 env[1327]: time="2024-02-08T23:51:12.554560257Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 8 23:51:12.555015 env[1327]: time="2024-02-08T23:51:12.554924732Z" level=info msg="Loading containers: start." 
Feb 8 23:51:12.804000 audit[1357]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.804000 audit[1357]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd8a4fce50 a2=0 a3=7ffd8a4fce3c items=0 ppid=1327 pid=1357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.804000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 8 23:51:12.810000 audit[1359]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.810000 audit[1359]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe8f5ad9a0 a2=0 a3=7ffe8f5ad98c items=0 ppid=1327 pid=1359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.810000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 8 23:51:12.814000 audit[1361]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1361 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.814000 audit[1361]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd0616b220 a2=0 a3=7ffd0616b20c items=0 ppid=1327 pid=1361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.814000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 8 23:51:12.819000 audit[1363]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.819000 audit[1363]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd7d9b42b0 a2=0 a3=7ffd7d9b429c items=0 ppid=1327 pid=1363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.819000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 8 23:51:12.826000 audit[1365]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.826000 audit[1365]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd9b3de7e0 a2=0 a3=7ffd9b3de7cc items=0 ppid=1327 pid=1365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.826000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 8 23:51:12.851000 audit[1370]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 
23:51:12.851000 audit[1370]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd12bbf680 a2=0 a3=7ffd12bbf66c items=0 ppid=1327 pid=1370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.851000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 8 23:51:12.868000 audit[1372]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.868000 audit[1372]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff42b952a0 a2=0 a3=7fff42b9528c items=0 ppid=1327 pid=1372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.868000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 8 23:51:12.873000 audit[1374]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.873000 audit[1374]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffeea4908f0 a2=0 a3=7ffeea4908dc items=0 ppid=1327 pid=1374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.873000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 8 23:51:12.877000 audit[1376]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.877000 audit[1376]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe727b5f10 a2=0 a3=7ffe727b5efc items=0 ppid=1327 pid=1376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.877000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:51:12.895000 audit[1380]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.895000 audit[1380]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcc2588f60 a2=0 a3=7ffcc2588f4c items=0 ppid=1327 pid=1380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.895000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:51:12.898000 audit[1381]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1381 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:12.898000 audit[1381]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc053a5590 a2=0 a3=7ffc053a557c items=0 ppid=1327 pid=1381 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:12.898000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:51:12.923400 kernel: Initializing XFRM netlink socket Feb 8 23:51:13.024670 env[1327]: time="2024-02-08T23:51:13.024498214Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 8 23:51:13.086000 audit[1389]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.086000 audit[1389]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc2e9c7820 a2=0 a3=7ffc2e9c780c items=0 ppid=1327 pid=1389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.086000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 8 23:51:13.108000 audit[1393]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.108000 audit[1393]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd8cb88400 a2=0 a3=7ffd8cb883ec items=0 ppid=1327 pid=1393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.108000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 8 23:51:13.112000 audit[1396]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.112000 audit[1396]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcd234d4d0 a2=0 a3=7ffcd234d4bc items=0 ppid=1327 pid=1396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.112000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 8 23:51:13.115000 audit[1398]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.115000 audit[1398]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd0b6647d0 a2=0 a3=7ffd0b6647bc items=0 ppid=1327 pid=1398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.115000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 8 23:51:13.118000 audit[1400]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1400 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.118000 audit[1400]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd365365d0 a2=0 a3=7ffd365365bc items=0 ppid=1327 pid=1400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.118000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 8 23:51:13.120000 audit[1402]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.120000 audit[1402]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff19d08980 a2=0 a3=7fff19d0896c items=0 ppid=1327 pid=1402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.120000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 8 23:51:13.123000 audit[1404]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.123000 audit[1404]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe4aa96f80 a2=0 a3=7ffe4aa96f6c items=0 ppid=1327 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.123000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 8 23:51:13.141000 audit[1407]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.141000 audit[1407]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffec3b746f0 a2=0 a3=7ffec3b746dc items=0 ppid=1327 pid=1407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.141000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 8 23:51:13.144000 audit[1409]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.144000 audit[1409]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc68bdf3d0 a2=0 a3=7ffc68bdf3bc items=0 ppid=1327 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.144000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 8 23:51:13.149000 
audit[1411]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.149000 audit[1411]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffe9402fa0 a2=0 a3=7fffe9402f8c items=0 ppid=1327 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.149000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 8 23:51:13.153000 audit[1413]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.153000 audit[1413]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc426af3f0 a2=0 a3=7ffc426af3dc items=0 ppid=1327 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.153000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 8 23:51:13.156176 systemd-networkd[1029]: docker0: Link UP Feb 8 23:51:13.241000 audit[1417]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.241000 audit[1417]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc0cf34990 a2=0 a3=7ffc0cf3497c items=0 ppid=1327 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.241000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:51:13.243000 audit[1418]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:13.243000 audit[1418]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcfd48c630 a2=0 a3=7ffcfd48c61c items=0 ppid=1327 pid=1418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:13.243000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 8 23:51:13.246525 env[1327]: time="2024-02-08T23:51:13.246418467Z" level=info msg="Loading containers: done." 
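Each NETFILTER_CFG record above is followed by a PROCTITLE line whose value is the hex-encoded, NUL-separated command line of the iptables call that triggered it. A minimal Python 3 sketch for decoding one of those values back into readable form; the sample string is copied verbatim from a record above.

    # Decode an audit PROCTITLE value (hex-encoded argv with NUL-separated arguments).
    def decode_proctitle(hex_title: str) -> str:
        raw = bytes.fromhex(hex_title)
        return ' '.join(part.decode() for part in raw.split(b'\x00') if part)

    sample = '2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552'
    print(decode_proctitle(sample))
    # -> /usr/sbin/iptables --wait -t nat -N DOCKER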
Feb 8 23:51:13.285011 env[1327]: time="2024-02-08T23:51:13.284908776Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:51:13.286014 env[1327]: time="2024-02-08T23:51:13.285947477Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:51:13.286542 env[1327]: time="2024-02-08T23:51:13.286477602Z" level=info msg="Daemon has completed initialization" Feb 8 23:51:13.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:13.318517 systemd[1]: Started docker.service. Feb 8 23:51:13.334787 env[1327]: time="2024-02-08T23:51:13.334695212Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:51:13.377732 systemd[1]: Reloading. Feb 8 23:51:13.498136 /usr/lib/systemd/system-generators/torcx-generator[1467]: time="2024-02-08T23:51:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:51:13.498206 /usr/lib/systemd/system-generators/torcx-generator[1467]: time="2024-02-08T23:51:13Z" level=info msg="torcx already run" Feb 8 23:51:13.628421 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:51:13.628440 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:51:13.655190 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:51:13.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:13.730691 systemd[1]: Started kubelet.service. Feb 8 23:51:13.830115 kubelet[1518]: E0208 23:51:13.830044 1518 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:51:13.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 8 23:51:13.832820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:51:13.833000 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:51:14.775087 env[1139]: time="2024-02-08T23:51:14.774993595Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 8 23:51:15.746564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709031276.mount: Deactivated successfully. 
Feb 8 23:51:18.944159 env[1139]: time="2024-02-08T23:51:18.942767261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:18.953282 env[1139]: time="2024-02-08T23:51:18.953157774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:18.957955 env[1139]: time="2024-02-08T23:51:18.957848531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:18.972620 env[1139]: time="2024-02-08T23:51:18.972471538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:18.982385 env[1139]: time="2024-02-08T23:51:18.982251268Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 8 23:51:19.009140 env[1139]: time="2024-02-08T23:51:19.008992019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 8 23:51:22.132049 env[1139]: time="2024-02-08T23:51:22.131179913Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:22.136294 env[1139]: time="2024-02-08T23:51:22.136214363Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:22.140783 env[1139]: time="2024-02-08T23:51:22.140718087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:22.146809 env[1139]: time="2024-02-08T23:51:22.146751699Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:22.149949 env[1139]: time="2024-02-08T23:51:22.147835235Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 8 23:51:22.177237 env[1139]: time="2024-02-08T23:51:22.177129781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 8 23:51:24.088415 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 8 23:51:24.088704 kernel: audit: type=1130 audit(1707436284.083:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:24.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:51:24.084084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:51:24.084407 systemd[1]: Stopped kubelet.service. Feb 8 23:51:24.087558 systemd[1]: Started kubelet.service. Feb 8 23:51:24.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:24.094090 kernel: audit: type=1131 audit(1707436284.083:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:24.094139 kernel: audit: type=1130 audit(1707436284.086:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:24.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:24.151330 kubelet[1551]: E0208 23:51:24.151246 1551 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:51:24.157967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:51:24.158126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:51:24.166327 kernel: audit: type=1131 audit(1707436284.157:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 8 23:51:24.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Feb 8 23:51:24.557827 env[1139]: time="2024-02-08T23:51:24.556479479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:24.681416 env[1139]: time="2024-02-08T23:51:24.681347315Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:24.837782 env[1139]: time="2024-02-08T23:51:24.837161739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:24.846252 env[1139]: time="2024-02-08T23:51:24.846172725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:24.848581 env[1139]: time="2024-02-08T23:51:24.848502571Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 8 23:51:24.871142 env[1139]: time="2024-02-08T23:51:24.871080228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 8 23:51:26.688394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796523550.mount: Deactivated successfully. Feb 8 23:51:27.485341 env[1139]: time="2024-02-08T23:51:27.485171351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:27.488971 env[1139]: time="2024-02-08T23:51:27.488897931Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:27.492645 env[1139]: time="2024-02-08T23:51:27.492587237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:27.496562 env[1139]: time="2024-02-08T23:51:27.496500625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 8 23:51:27.501406 env[1139]: time="2024-02-08T23:51:27.495187546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:27.518701 env[1139]: time="2024-02-08T23:51:27.518640558Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:51:28.097670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3554764144.mount: Deactivated successfully. 
Feb 8 23:51:28.114347 env[1139]: time="2024-02-08T23:51:28.114115360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:28.118477 env[1139]: time="2024-02-08T23:51:28.117410207Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:28.120860 env[1139]: time="2024-02-08T23:51:28.120793686Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:28.123987 env[1139]: time="2024-02-08T23:51:28.123926090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:28.125749 env[1139]: time="2024-02-08T23:51:28.125690260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:51:28.148246 env[1139]: time="2024-02-08T23:51:28.148179040Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 8 23:51:29.292020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146019101.mount: Deactivated successfully. Feb 8 23:51:34.292871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 8 23:51:34.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:34.294275 systemd[1]: Stopped kubelet.service. Feb 8 23:51:34.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:34.301481 kernel: audit: type=1130 audit(1707436294.293:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:34.301619 kernel: audit: type=1131 audit(1707436294.297:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:34.303471 systemd[1]: Started kubelet.service. Feb 8 23:51:34.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:34.309348 kernel: audit: type=1130 audit(1707436294.302:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:51:34.402017 kubelet[1576]: E0208 23:51:34.401064 1576 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:51:34.404346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:51:34.404500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:51:34.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 8 23:51:34.409327 kernel: audit: type=1131 audit(1707436294.403:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 8 23:51:36.366706 env[1139]: time="2024-02-08T23:51:36.366653419Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:36.369451 env[1139]: time="2024-02-08T23:51:36.369428671Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:36.372770 env[1139]: time="2024-02-08T23:51:36.372702541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:36.377808 env[1139]: time="2024-02-08T23:51:36.377742413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:36.379194 env[1139]: time="2024-02-08T23:51:36.379141278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 8 23:51:36.394270 env[1139]: time="2024-02-08T23:51:36.394207792Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 8 23:51:37.312675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805580744.mount: Deactivated successfully. 
Feb 8 23:51:38.320389 env[1139]: time="2024-02-08T23:51:38.320274094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:38.322817 env[1139]: time="2024-02-08T23:51:38.322760333Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:38.325188 env[1139]: time="2024-02-08T23:51:38.325137611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:38.327575 env[1139]: time="2024-02-08T23:51:38.327543937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:38.329131 env[1139]: time="2024-02-08T23:51:38.329063692Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 8 23:51:40.806848 update_engine[1129]: I0208 23:51:40.806712 1129 update_attempter.cc:509] Updating boot flags... Feb 8 23:51:43.073270 systemd[1]: Stopped kubelet.service. Feb 8 23:51:43.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:43.084326 kernel: audit: type=1130 audit(1707436303.074:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:43.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:43.094372 kernel: audit: type=1131 audit(1707436303.077:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:43.107596 systemd[1]: Reloading. Feb 8 23:51:43.202276 /usr/lib/systemd/system-generators/torcx-generator[1685]: time="2024-02-08T23:51:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:51:43.202324 /usr/lib/systemd/system-generators/torcx-generator[1685]: time="2024-02-08T23:51:43Z" level=info msg="torcx already run" Feb 8 23:51:43.294622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:51:43.294644 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
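The pull sequence above ends each image fetch with a 'PullImage "..." returns image reference "sha256:..."' entry. A minimal Python 3 sketch that collects those pairs into an image-to-reference map; the pattern mirrors the escaped quoting exactly as it is rendered in this excerpt, and treating that wording as stable across containerd versions is an assumption.

    import re

    # Collect 'PullImage "<image>" returns image reference "<ref>"' pairs from
    # journal text rendered with escaped quotes, as in this excerpt.
    PULL = re.compile(r'PullImage \\"([^"\\]+)\\" returns image reference \\"([^"\\]+)\\"')

    journal_text = r'msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""'

    pulled = dict(PULL.findall(journal_text))
    print(pulled['registry.k8s.io/pause:3.9'])
    # -> sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c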
Feb 8 23:51:43.322678 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:51:43.415017 systemd[1]: Started kubelet.service. Feb 8 23:51:43.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:43.426073 kernel: audit: type=1130 audit(1707436303.414:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:43.503841 kubelet[1738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:51:43.504175 kubelet[1738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:51:43.504323 kubelet[1738]: I0208 23:51:43.504271 1738 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:51:43.505828 kubelet[1738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:51:43.505900 kubelet[1738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:51:44.501642 kubelet[1738]: I0208 23:51:44.501619 1738 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:51:44.501787 kubelet[1738]: I0208 23:51:44.501777 1738 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:51:44.502055 kubelet[1738]: I0208 23:51:44.502042 1738 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:51:44.507788 kubelet[1738]: I0208 23:51:44.507769 1738 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:51:44.509020 kubelet[1738]: E0208 23:51:44.509004 1738 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.509145 kubelet[1738]: I0208 23:51:44.509133 1738 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:51:44.509263 kubelet[1738]: I0208 23:51:44.509152 1738 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:51:44.509469 kubelet[1738]: I0208 23:51:44.509419 1738 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:51:44.509584 kubelet[1738]: I0208 23:51:44.509480 1738 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:51:44.509584 kubelet[1738]: I0208 23:51:44.509509 1738 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:51:44.509713 kubelet[1738]: I0208 23:51:44.509683 1738 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:51:44.518844 kubelet[1738]: I0208 23:51:44.518825 1738 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:51:44.518982 kubelet[1738]: I0208 23:51:44.518971 1738 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:51:44.519061 kubelet[1738]: I0208 23:51:44.519051 1738 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:51:44.519130 kubelet[1738]: I0208 23:51:44.519120 1738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:51:44.520224 kubelet[1738]: I0208 23:51:44.520194 1738 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:51:44.520685 kubelet[1738]: W0208 23:51:44.520655 1738 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
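The container-manager dump above lists the default hard-eviction thresholds (memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15%). The following Python 3 sketch turns two of them into absolute numbers; the filesystem capacity used below is hypothetical and does not come from this log.

    # Hard-eviction thresholds from the container-manager dump above, applied to a
    # hypothetical root filesystem; nodefs.inodesFree<5% and imagefs.available<15%
    # are fractions of inode/imagefs capacity and work the same way.
    GIB = 1024 ** 3
    MIB = 1024 ** 2

    nodefs_size = 20 * GIB              # hypothetical 20 GiB root filesystem (not from the log)
    memory_floor = 100 * MIB            # memory.available < 100Mi (absolute quantity)
    nodefs_floor = 0.10 * nodefs_size   # nodefs.available < 10% (percentage of capacity)

    print(f'evict pods when free memory drops below {memory_floor // MIB} MiB '
          f'or free nodefs drops below {nodefs_floor / GIB:.0f} GiB')
    # -> evict pods when free memory drops below 100 MiB or free nodefs drops below 2 GiB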
Feb 8 23:51:44.521441 kubelet[1738]: I0208 23:51:44.521413 1738 server.go:1186] "Started kubelet" Feb 8 23:51:44.521679 kubelet[1738]: W0208 23:51:44.521609 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-a-bd3a159777.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.521724 kubelet[1738]: E0208 23:51:44.521707 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-a-bd3a159777.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.525177 kubelet[1738]: W0208 23:51:44.525120 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.64:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.523000 audit[1738]: AVC avc: denied { mac_admin } for pid=1738 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:44.527741 kubelet[1738]: E0208 23:51:44.527712 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.64:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.527890 kubelet[1738]: I0208 23:51:44.525364 1738 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 8 23:51:44.528104 kubelet[1738]: I0208 23:51:44.528078 1738 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 8 23:51:44.523000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:44.529142 kubelet[1738]: I0208 23:51:44.529114 1738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:51:44.530820 kernel: audit: type=1400 audit(1707436304.523:189): avc: denied { mac_admin } for pid=1738 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:44.530942 kernel: audit: type=1401 audit(1707436304.523:189): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:44.530996 kernel: audit: type=1300 audit(1707436304.523:189): arch=c000003e syscall=188 success=no exit=-22 a0=c0009fd2c0 a1=c00018d860 a2=c0009fd290 a3=25 items=0 ppid=1 pid=1738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.523000 audit[1738]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009fd2c0 a1=c00018d860 a2=c0009fd290 a3=25 items=0 ppid=1 pid=1738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 8 23:51:44.532578 kubelet[1738]: I0208 23:51:44.525431 1738 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:51:44.534060 kubelet[1738]: I0208 23:51:44.534025 1738 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:51:44.523000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:51:44.539131 kubelet[1738]: E0208 23:51:44.526225 1738 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:51:44.539637 kubelet[1738]: E0208 23:51:44.539612 1738 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:51:44.539797 kubelet[1738]: E0208 23:51:44.526457 1738 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845c79a19d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 521370072, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 521370072, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.24.4.64:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.64:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:51:44.523000 audit[1738]: AVC avc: denied { mac_admin } for pid=1738 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:44.545776 kernel: audit: type=1327 audit(1707436304.523:189): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:51:44.545934 kernel: audit: type=1400 audit(1707436304.523:190): avc: denied { mac_admin } for pid=1738 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:44.546264 kubelet[1738]: I0208 23:51:44.546236 1738 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:51:44.523000 audit: SELINUX_ERR op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:44.547284 kubelet[1738]: I0208 23:51:44.547251 1738 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:51:44.523000 audit[1738]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006e1580 a1=c0006df410 a2=c000e85b60 a3=25 items=0 ppid=1 pid=1738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.549052 kubelet[1738]: W0208 23:51:44.548982 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.549244 kubelet[1738]: E0208 23:51:44.549217 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.549595 kubelet[1738]: E0208 23:51:44.549555 1738 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.24.4.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-a-bd3a159777.novalocal?timeout=10s": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.553743 kernel: audit: type=1401 audit(1707436304.523:190): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:44.553904 kernel: audit: type=1300 audit(1707436304.523:190): arch=c000003e syscall=188 success=no exit=-22 a0=c0006e1580 a1=c0006df410 a2=c000e85b60 a3=25 items=0 ppid=1 pid=1738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.523000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:51:44.545000 audit[1748]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.545000 audit[1748]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe0a15eb50 a2=0 a3=7ffe0a15eb3c items=0 ppid=1738 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.545000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 8 23:51:44.553000 audit[1749]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.553000 audit[1749]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc833f95c0 a2=0 a3=7ffc833f95ac items=0 ppid=1738 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 8 23:51:44.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 8 23:51:44.557000 audit[1751]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1751 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.557000 audit[1751]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc8ed68980 a2=0 a3=7ffc8ed6896c items=0 ppid=1738 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 8 23:51:44.562000 audit[1753]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1753 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.562000 audit[1753]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffee12ebc50 a2=0 a3=7ffee12ebc3c items=0 ppid=1738 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 8 23:51:44.588000 audit[1760]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.588000 audit[1760]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdf58dd000 a2=0 a3=7ffdf58dcfec items=0 ppid=1738 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 8 23:51:44.595000 audit[1761]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.595000 audit[1761]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe28c29160 a2=0 a3=7ffe28c2914c items=0 ppid=1738 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 8 23:51:44.604000 audit[1764]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.604000 audit[1764]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffeb382a690 a2=0 a3=7ffeb382a67c items=0 ppid=1738 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.604000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 8 23:51:44.610000 audit[1768]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.610000 audit[1768]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffd12eb0020 a2=0 a3=7ffd12eb000c items=0 ppid=1738 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 8 23:51:44.611000 audit[1769]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.611000 audit[1769]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda4fc03a0 a2=0 a3=7ffda4fc038c items=0 ppid=1738 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.611000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 8 23:51:44.612000 audit[1770]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.612000 audit[1770]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff717b700 a2=0 a3=7ffff717b6ec items=0 ppid=1738 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 8 23:51:44.614000 audit[1772]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.614000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe08767100 a2=0 a3=7ffe087670ec items=0 ppid=1738 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.614000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 8 23:51:44.618695 kubelet[1738]: I0208 23:51:44.618653 1738 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:51:44.618695 kubelet[1738]: I0208 23:51:44.618677 1738 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:51:44.618695 kubelet[1738]: I0208 23:51:44.618692 1738 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:51:44.619000 audit[1774]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1774 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
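The audit PROCTITLE records scattered through this section carry the iptables/ip6tables command lines hex-encoded, with NUL bytes separating the arguments. A short decoding sketch — the sample value is copied from the 23:51:44.545 PROCTITLE record above, and the function name is just for illustration:

# An audit PROCTITLE value is the process argv hex-encoded, with the
# individual arguments separated by NUL bytes.
def decode_proctitle(hex_value):
    return bytes.fromhex(hex_value).decode("utf-8", "replace").split("\x00")

# Copied from the PROCTITLE record at 23:51:44.545 above.
sample = ("69707461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
print(decode_proctitle(sample))
# ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-IPTABLES-HINT', '-t', 'mangle']

Decoded this way, the records show the kubelet creating its usual chains — KUBE-IPTABLES-HINT, KUBE-FIREWALL, KUBE-MARK-DROP, KUBE-MARK-MASQ, KUBE-POSTROUTING and the KUBE-KUBELET-CANARY chains — for both IPv4 and IPv6.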
Feb 8 23:51:44.619000 audit[1774]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffffa92d120 a2=0 a3=7ffffa92d10c items=0 ppid=1738 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 8 23:51:44.621000 audit[1776]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.621000 audit[1776]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffe9c469020 a2=0 a3=7ffe9c46900c items=0 ppid=1738 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.621000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 8 23:51:44.623000 audit[1778]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.623000 audit[1778]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc8c0fd9a0 a2=0 a3=7ffc8c0fd98c items=0 ppid=1738 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 8 23:51:44.626519 kubelet[1738]: I0208 23:51:44.626493 1738 policy_none.go:49] "None policy: Start" Feb 8 23:51:44.627141 kubelet[1738]: I0208 23:51:44.627105 1738 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:51:44.627141 kubelet[1738]: I0208 23:51:44.627128 1738 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:51:44.626000 audit[1780]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.626000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe891dd300 a2=0 a3=7ffe891dd2ec items=0 ppid=1738 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.626000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 8 23:51:44.627000 audit[1781]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.627000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 
a1=7ffec3083bc0 a2=0 a3=7ffec3083bac items=0 ppid=1738 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 8 23:51:44.629684 kubelet[1738]: I0208 23:51:44.628424 1738 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:51:44.629000 audit[1782]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.629000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6fe72500 a2=0 a3=7ffc6fe724ec items=0 ppid=1738 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.629000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 8 23:51:44.633000 audit[1783]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.633000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe4ffb9e90 a2=0 a3=7ffe4ffb9e7c items=0 ppid=1738 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 8 23:51:44.635440 kubelet[1738]: I0208 23:51:44.635426 1738 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:51:44.634000 audit[1738]: AVC avc: denied { mac_admin } for pid=1738 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:44.634000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:44.634000 audit[1738]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008bbf20 a1=c00018d218 a2=c0008bbec0 a3=25 items=0 ppid=1 pid=1738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.634000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:51:44.635807 kubelet[1738]: I0208 23:51:44.635794 1738 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 8 23:51:44.636011 kubelet[1738]: I0208 23:51:44.635998 1738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:51:44.636000 audit[1784]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.636000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed5a4a3b0 a2=0 a3=7ffed5a4a39c items=0 ppid=1738 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 8 23:51:44.639000 audit[1786]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:51:44.639000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc3154760 a2=0 a3=7fffc315474c items=0 ppid=1738 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 8 23:51:44.641000 audit[1787]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.641000 audit[1787]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc1198a590 a2=0 a3=7ffc1198a57c items=0 ppid=1738 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.641000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 8 23:51:44.642000 audit[1788]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.642000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fffb755a460 a2=0 a3=7fffb755a44c items=0 ppid=1738 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.642000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 8 23:51:44.645000 audit[1790]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.645000 audit[1790]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffdce001280 a2=0 a3=7ffdce00126c items=0 ppid=1738 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:51:44.645000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 8 23:51:44.646000 audit[1791]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.646000 audit[1791]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffefbc544b0 a2=0 a3=7ffefbc5449c items=0 ppid=1738 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 8 23:51:44.647000 audit[1792]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.647000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3380b190 a2=0 a3=7ffd3380b17c items=0 ppid=1738 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 8 23:51:44.649000 audit[1794]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.649000 audit[1794]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd2503a900 a2=0 a3=7ffd2503a8ec items=0 ppid=1738 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 8 23:51:44.651000 audit[1796]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.651000 audit[1796]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeb1c4c8d0 a2=0 a3=7ffeb1c4c8bc items=0 ppid=1738 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 8 23:51:44.655172 kubelet[1738]: E0208 23:51:44.655130 1738 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-a-bd3a159777.novalocal\" not found" Feb 8 23:51:44.654000 audit[1798]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 
23:51:44.656201 kubelet[1738]: I0208 23:51:44.656101 1738 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.654000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc8e1846a0 a2=0 a3=7ffc8e18468c items=0 ppid=1738 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 8 23:51:44.656521 kubelet[1738]: E0208 23:51:44.656508 1738 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.64:6443/api/v1/nodes\": dial tcp 172.24.4.64:6443: connect: connection refused" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.657000 audit[1800]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.657000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffeefbd5900 a2=0 a3=7ffeefbd58ec items=0 ppid=1738 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 8 23:51:44.660000 audit[1802]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.660000 audit[1802]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffe43ac8a70 a2=0 a3=7ffe43ac8a5c items=0 ppid=1738 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 8 23:51:44.663009 kubelet[1738]: I0208 23:51:44.662986 1738 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:51:44.663075 kubelet[1738]: I0208 23:51:44.663013 1738 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:51:44.663075 kubelet[1738]: I0208 23:51:44.663039 1738 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:51:44.663136 kubelet[1738]: E0208 23:51:44.663080 1738 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:51:44.663622 kubelet[1738]: W0208 23:51:44.663585 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.663707 kubelet[1738]: E0208 23:51:44.663696 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.663000 audit[1803]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.663000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2c437990 a2=0 a3=7fff2c43797c items=0 ppid=1738 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 8 23:51:44.664000 audit[1804]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.664000 audit[1804]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9e5a2a50 a2=0 a3=7fff9e5a2a3c items=0 ppid=1738 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.664000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 8 23:51:44.665000 audit[1805]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:51:44.665000 audit[1805]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce1f42710 a2=0 a3=7ffce1f426fc items=0 ppid=1738 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:44.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 8 23:51:44.750581 kubelet[1738]: E0208 23:51:44.750553 1738 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.24.4.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-a-bd3a159777.novalocal?timeout=10s": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:44.767764 kubelet[1738]: I0208 23:51:44.764147 1738 
topology_manager.go:210] "Topology Admit Handler" Feb 8 23:51:44.769594 kubelet[1738]: I0208 23:51:44.769580 1738 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:51:44.771563 kubelet[1738]: I0208 23:51:44.771523 1738 status_manager.go:698] "Failed to get status for pod" podUID=6e565f677c9e93973cea69c5edaf54b6 pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" err="Get \"https://172.24.4.64:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\": dial tcp 172.24.4.64:6443: connect: connection refused" Feb 8 23:51:44.771893 kubelet[1738]: I0208 23:51:44.771856 1738 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:51:44.777280 kubelet[1738]: I0208 23:51:44.777263 1738 status_manager.go:698] "Failed to get status for pod" podUID=d8ca20db586c41c09dbdc7b2cf2210ae pod="kube-system/kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal" err="Get \"https://172.24.4.64:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal\": dial tcp 172.24.4.64:6443: connect: connection refused" Feb 8 23:51:44.777646 kubelet[1738]: I0208 23:51:44.777631 1738 status_manager.go:698] "Failed to get status for pod" podUID=e7fe93a4f1b488b60ac041421aa8874d pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" err="Get \"https://172.24.4.64:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal\": dial tcp 172.24.4.64:6443: connect: connection refused" Feb 8 23:51:44.855933 kubelet[1738]: I0208 23:51:44.855830 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7fe93a4f1b488b60ac041421aa8874d-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"e7fe93a4f1b488b60ac041421aa8874d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.856158 kubelet[1738]: I0208 23:51:44.856073 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7fe93a4f1b488b60ac041421aa8874d-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"e7fe93a4f1b488b60ac041421aa8874d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.856454 kubelet[1738]: I0208 23:51:44.856385 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.856679 kubelet[1738]: I0208 23:51:44.856604 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8ca20db586c41c09dbdc7b2cf2210ae-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"d8ca20db586c41c09dbdc7b2cf2210ae\") " pod="kube-system/kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.856798 kubelet[1738]: I0208 23:51:44.856764 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-kubeconfig\") pod 
\"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.857114 kubelet[1738]: I0208 23:51:44.856996 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.857290 kubelet[1738]: I0208 23:51:44.857216 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7fe93a4f1b488b60ac041421aa8874d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"e7fe93a4f1b488b60ac041421aa8874d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.858027 kubelet[1738]: I0208 23:51:44.857494 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.858027 kubelet[1738]: I0208 23:51:44.857710 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.860133 kubelet[1738]: I0208 23:51:44.860063 1738 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:44.860894 kubelet[1738]: E0208 23:51:44.860863 1738 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.64:6443/api/v1/nodes\": dial tcp 172.24.4.64:6443: connect: connection refused" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:45.076750 env[1139]: time="2024-02-08T23:51:45.076648106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal,Uid:6e565f677c9e93973cea69c5edaf54b6,Namespace:kube-system,Attempt:0,}" Feb 8 23:51:45.082981 env[1139]: time="2024-02-08T23:51:45.082379423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal,Uid:d8ca20db586c41c09dbdc7b2cf2210ae,Namespace:kube-system,Attempt:0,}" Feb 8 23:51:45.084056 env[1139]: time="2024-02-08T23:51:45.083682772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal,Uid:e7fe93a4f1b488b60ac041421aa8874d,Namespace:kube-system,Attempt:0,}" Feb 8 23:51:45.152072 kubelet[1738]: E0208 23:51:45.151960 1738 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.24.4.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-a-bd3a159777.novalocal?timeout=10s": dial tcp 172.24.4.64:6443: connect: 
connection refused Feb 8 23:51:45.264087 kubelet[1738]: I0208 23:51:45.264002 1738 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:45.264739 kubelet[1738]: E0208 23:51:45.264707 1738 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.64:6443/api/v1/nodes\": dial tcp 172.24.4.64:6443: connect: connection refused" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:45.491066 kubelet[1738]: W0208 23:51:45.490375 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-a-bd3a159777.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:45.491066 kubelet[1738]: E0208 23:51:45.490486 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-a-bd3a159777.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:45.546868 kubelet[1738]: W0208 23:51:45.546701 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.64:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:45.546868 kubelet[1738]: E0208 23:51:45.546816 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.64:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:45.666442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66105529.mount: Deactivated successfully. 
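With the API server at 172.24.4.64:6443 refusing connections, the lease controller above keeps doubling its retry interval: 200ms, 400ms, and now 800ms (1.6s and 3.2s follow later in the log). A minimal sketch of that doubling backoff, purely illustrative rather than kubelet code; the 7s cap and the stand-in ensure_lease helper are assumptions for the example:

def lease_retry_delays(initial=0.2, factor=2.0, cap=7.0):
    """Doubling backoff matching the intervals logged here
    (200ms, 400ms, 800ms, 1.6s, 3.2s, ...); the 7s cap is an assumption."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

def ensure_lease(attempt):
    # Stand-in for the real "ensure lease exists" call against the API
    # server; here it keeps failing, as in the log, until attempt 5.
    return attempt >= 5

for attempt, delay in enumerate(lease_retry_delays()):
    if ensure_lease(attempt):
        print("lease ensured")
        break
    print(f"failed to ensure lease exists, will retry in {delay:g}s")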
Feb 8 23:51:45.679223 env[1139]: time="2024-02-08T23:51:45.679125539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.691688 env[1139]: time="2024-02-08T23:51:45.691613910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.695665 env[1139]: time="2024-02-08T23:51:45.695611882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.697689 env[1139]: time="2024-02-08T23:51:45.697601820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.699646 env[1139]: time="2024-02-08T23:51:45.699598531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.702145 env[1139]: time="2024-02-08T23:51:45.702071944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.708660 env[1139]: time="2024-02-08T23:51:45.708608678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.715182 kubelet[1738]: W0208 23:51:45.715010 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:45.715182 kubelet[1738]: E0208 23:51:45.715119 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:45.717355 env[1139]: time="2024-02-08T23:51:45.717230529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.720079 env[1139]: time="2024-02-08T23:51:45.720027326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.722065 env[1139]: time="2024-02-08T23:51:45.722017464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.723981 env[1139]: time="2024-02-08T23:51:45.723931360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.758057 env[1139]: time="2024-02-08T23:51:45.756491073Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:51:45.781356 env[1139]: time="2024-02-08T23:51:45.779709120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:51:45.781356 env[1139]: time="2024-02-08T23:51:45.779809199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:51:45.781356 env[1139]: time="2024-02-08T23:51:45.779823138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:51:45.781356 env[1139]: time="2024-02-08T23:51:45.780459175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ef687181ee3f5df269cb49853af6cb9b7d83124d296121f29ca856f09e5201f pid=1827 runtime=io.containerd.runc.v2 Feb 8 23:51:45.796326 env[1139]: time="2024-02-08T23:51:45.796213489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:51:45.796326 env[1139]: time="2024-02-08T23:51:45.796265192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:51:45.796611 env[1139]: time="2024-02-08T23:51:45.796569949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:51:45.796868 env[1139]: time="2024-02-08T23:51:45.796823695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a0192a424f10b1104240ca07879b495965a5fc4eea1a7311ad1fd92f9863f86 pid=1815 runtime=io.containerd.runc.v2 Feb 8 23:51:45.953790 kubelet[1738]: E0208 23:51:45.953650 1738 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.24.4.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-a-bd3a159777.novalocal?timeout=10s": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:45.992967 env[1139]: time="2024-02-08T23:51:45.991668752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal,Uid:d8ca20db586c41c09dbdc7b2cf2210ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ef687181ee3f5df269cb49853af6cb9b7d83124d296121f29ca856f09e5201f\"" Feb 8 23:51:45.995940 env[1139]: time="2024-02-08T23:51:45.995905279Z" level=info msg="CreateContainer within sandbox \"0ef687181ee3f5df269cb49853af6cb9b7d83124d296121f29ca856f09e5201f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:51:46.009446 env[1139]: time="2024-02-08T23:51:46.008895099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal,Uid:6e565f677c9e93973cea69c5edaf54b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a0192a424f10b1104240ca07879b495965a5fc4eea1a7311ad1fd92f9863f86\"" Feb 8 23:51:46.012053 env[1139]: time="2024-02-08T23:51:46.012007278Z" level=info msg="CreateContainer within sandbox \"7a0192a424f10b1104240ca07879b495965a5fc4eea1a7311ad1fd92f9863f86\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" 
Feb 8 23:51:46.066486 kubelet[1738]: W0208 23:51:46.066184 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:46.066486 kubelet[1738]: E0208 23:51:46.066237 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:46.070691 kubelet[1738]: I0208 23:51:46.070025 1738 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:46.071155 kubelet[1738]: E0208 23:51:46.071130 1738 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.64:6443/api/v1/nodes\": dial tcp 172.24.4.64:6443: connect: connection refused" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:46.077860 env[1139]: time="2024-02-08T23:51:46.077768230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:51:46.077860 env[1139]: time="2024-02-08T23:51:46.077835623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:51:46.077860 env[1139]: time="2024-02-08T23:51:46.077864441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:51:46.079036 env[1139]: time="2024-02-08T23:51:46.078935661Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7632b275aa16912018bc3be8a78c46839fd4ba530f4feee0923cf7a0a2f52c8d pid=1899 runtime=io.containerd.runc.v2 Feb 8 23:51:46.101092 env[1139]: time="2024-02-08T23:51:46.100509292Z" level=info msg="CreateContainer within sandbox \"0ef687181ee3f5df269cb49853af6cb9b7d83124d296121f29ca856f09e5201f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7908f15ce8151fd752ebccae828c5fe916ef9642c12d6fbfc247b7e87c62e398\"" Feb 8 23:51:46.102700 env[1139]: time="2024-02-08T23:51:46.102635390Z" level=info msg="StartContainer for \"7908f15ce8151fd752ebccae828c5fe916ef9642c12d6fbfc247b7e87c62e398\"" Feb 8 23:51:46.123791 env[1139]: time="2024-02-08T23:51:46.123681583Z" level=info msg="CreateContainer within sandbox \"7a0192a424f10b1104240ca07879b495965a5fc4eea1a7311ad1fd92f9863f86\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"636fb935e74e152837d90d38dc170b5237a4cce77d5b4533e871e5b446a4eb91\"" Feb 8 23:51:46.125868 env[1139]: time="2024-02-08T23:51:46.125809895Z" level=info msg="StartContainer for \"636fb935e74e152837d90d38dc170b5237a4cce77d5b4533e871e5b446a4eb91\"" Feb 8 23:51:46.186616 env[1139]: time="2024-02-08T23:51:46.186565362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal,Uid:e7fe93a4f1b488b60ac041421aa8874d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7632b275aa16912018bc3be8a78c46839fd4ba530f4feee0923cf7a0a2f52c8d\"" Feb 8 23:51:46.189570 env[1139]: time="2024-02-08T23:51:46.189501761Z" level=info msg="CreateContainer within sandbox 
\"7632b275aa16912018bc3be8a78c46839fd4ba530f4feee0923cf7a0a2f52c8d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:51:46.231622 env[1139]: time="2024-02-08T23:51:46.228651154Z" level=info msg="CreateContainer within sandbox \"7632b275aa16912018bc3be8a78c46839fd4ba530f4feee0923cf7a0a2f52c8d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3fe83d784859a437a6b45a6b175830e1124e327ab44aef3c407175b1b0c280af\"" Feb 8 23:51:46.232376 env[1139]: time="2024-02-08T23:51:46.232333787Z" level=info msg="StartContainer for \"3fe83d784859a437a6b45a6b175830e1124e327ab44aef3c407175b1b0c280af\"" Feb 8 23:51:46.247156 env[1139]: time="2024-02-08T23:51:46.247119599Z" level=info msg="StartContainer for \"7908f15ce8151fd752ebccae828c5fe916ef9642c12d6fbfc247b7e87c62e398\" returns successfully" Feb 8 23:51:46.289046 env[1139]: time="2024-02-08T23:51:46.288451371Z" level=info msg="StartContainer for \"636fb935e74e152837d90d38dc170b5237a4cce77d5b4533e871e5b446a4eb91\" returns successfully" Feb 8 23:51:46.362740 env[1139]: time="2024-02-08T23:51:46.362682877Z" level=info msg="StartContainer for \"3fe83d784859a437a6b45a6b175830e1124e327ab44aef3c407175b1b0c280af\" returns successfully" Feb 8 23:51:46.514965 kubelet[1738]: E0208 23:51:46.514930 1738 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:46.675609 kubelet[1738]: I0208 23:51:46.675383 1738 status_manager.go:698] "Failed to get status for pod" podUID=6e565f677c9e93973cea69c5edaf54b6 pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" err="Get \"https://172.24.4.64:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\": dial tcp 172.24.4.64:6443: connect: connection refused" Feb 8 23:51:46.680692 kubelet[1738]: I0208 23:51:46.680674 1738 status_manager.go:698] "Failed to get status for pod" podUID=e7fe93a4f1b488b60ac041421aa8874d pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" err="Get \"https://172.24.4.64:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal\": dial tcp 172.24.4.64:6443: connect: connection refused" Feb 8 23:51:46.720209 kubelet[1738]: I0208 23:51:46.720187 1738 status_manager.go:698] "Failed to get status for pod" podUID=d8ca20db586c41c09dbdc7b2cf2210ae pod="kube-system/kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal" err="Get \"https://172.24.4.64:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal\": dial tcp 172.24.4.64:6443: connect: connection refused" Feb 8 23:51:47.554510 kubelet[1738]: E0208 23:51:47.554474 1738 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.24.4.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-a-bd3a159777.novalocal?timeout=10s": dial tcp 172.24.4.64:6443: connect: connection refused Feb 8 23:51:47.673608 kubelet[1738]: I0208 23:51:47.673590 1738 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:50.216149 kubelet[1738]: I0208 23:51:50.216109 1738 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:50.267171 
kubelet[1738]: E0208 23:51:50.267070 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845c79a19d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 521370072, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 521370072, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:51:50.320359 kubelet[1738]: E0208 23:51:50.320228 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845c8aff956", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 539580758, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 539580758, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:51:50.378729 kubelet[1738]: E0208 23:51:50.378543 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845cd5e8e21", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-a-bd3a159777.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618130977, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618130977, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:51:50.435949 kubelet[1738]: E0208 23:51:50.435765 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845cd5ec0da", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-a-bd3a159777.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618143962, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618143962, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:51:50.495379 kubelet[1738]: E0208 23:51:50.494037 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845cd5eced4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-a-bd3a159777.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618147540, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618147540, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:51:50.528882 kubelet[1738]: I0208 23:51:50.528813 1738 apiserver.go:52] "Watching apiserver" Feb 8 23:51:50.548203 kubelet[1738]: I0208 23:51:50.548152 1738 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:51:50.551679 kubelet[1738]: E0208 23:51:50.551479 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845ce7ce597", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 636896663, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 636896663, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:51:50.601824 kubelet[1738]: I0208 23:51:50.601727 1738 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:51:50.614055 kubelet[1738]: E0208 23:51:50.613858 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845cd5e8e21", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-a-bd3a159777.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618130977, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 656052539, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:51:50.677542 kubelet[1738]: E0208 23:51:50.677394 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845cd5ec0da", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-a-bd3a159777.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618143962, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 656063491, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:51:50.743781 kubelet[1738]: E0208 23:51:50.743511 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845cd5eced4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-a-bd3a159777.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618147540, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 656067800, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:51:50.925183 kubelet[1738]: E0208 23:51:50.925012 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845cd5e8e21", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-a-bd3a159777.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618130977, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 769462510, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:51:51.323611 kubelet[1738]: E0208 23:51:51.323493 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845cd5ec0da", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-a-bd3a159777.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618143962, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 769485426, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:51:51.722659 kubelet[1738]: E0208 23:51:51.722360 1738 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-a-bd3a159777.novalocal.17b20845cd5eced4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-a-bd3a159777.novalocal", UID:"ci-3510-3-2-a-bd3a159777.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-a-bd3a159777.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-a-bd3a159777.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 618147540, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 51, 44, 769527490, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:51:53.115285 systemd[1]: Reloading. 
Feb 8 23:51:53.248714 /usr/lib/systemd/system-generators/torcx-generator[2059]: time="2024-02-08T23:51:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:51:53.249152 /usr/lib/systemd/system-generators/torcx-generator[2059]: time="2024-02-08T23:51:53Z" level=info msg="torcx already run" Feb 8 23:51:53.354935 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:51:53.355144 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:51:53.384424 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:51:53.532838 kubelet[1738]: I0208 23:51:53.532784 1738 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:51:53.533664 systemd[1]: Stopping kubelet.service... Feb 8 23:51:53.549775 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:51:53.550240 systemd[1]: Stopped kubelet.service. Feb 8 23:51:53.557218 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 8 23:51:53.557338 kernel: audit: type=1131 audit(1707436313.548:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:53.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:53.552224 systemd[1]: Started kubelet.service. Feb 8 23:51:53.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:53.566488 kernel: audit: type=1130 audit(1707436313.551:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:53.653256 kubelet[2113]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:51:53.653793 kubelet[2113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:51:53.656038 kubelet[2113]: I0208 23:51:53.655980 2113 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:51:53.659277 kubelet[2113]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 8 23:51:53.659372 kubelet[2113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:51:53.663207 kubelet[2113]: I0208 23:51:53.663191 2113 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:51:53.663324 kubelet[2113]: I0208 23:51:53.663291 2113 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:51:53.663621 kubelet[2113]: I0208 23:51:53.663606 2113 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:51:53.665008 kubelet[2113]: I0208 23:51:53.664993 2113 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:51:53.666156 kubelet[2113]: I0208 23:51:53.666143 2113 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:51:53.669019 kubelet[2113]: I0208 23:51:53.669004 2113 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 8 23:51:53.669494 kubelet[2113]: I0208 23:51:53.669483 2113 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:51:53.669625 kubelet[2113]: I0208 23:51:53.669614 2113 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:51:53.669816 kubelet[2113]: I0208 23:51:53.669777 2113 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:51:53.669864 kubelet[2113]: I0208 23:51:53.669819 2113 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:51:53.669864 kubelet[2113]: I0208 23:51:53.669863 2113 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:51:53.673633 kubelet[2113]: I0208 23:51:53.673618 2113 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:51:53.673729 kubelet[2113]: I0208 23:51:53.673718 2113 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:51:53.673813 kubelet[2113]: I0208 23:51:53.673803 2113 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:51:53.673890 
kubelet[2113]: I0208 23:51:53.673879 2113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:51:53.683517 kubelet[2113]: I0208 23:51:53.683469 2113 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:51:53.687417 kubelet[2113]: I0208 23:51:53.684023 2113 server.go:1186] "Started kubelet" Feb 8 23:51:53.706742 kernel: audit: type=1400 audit(1707436313.694:227): avc: denied { mac_admin } for pid=2113 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:53.706839 kernel: audit: type=1401 audit(1707436313.694:227): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:53.694000 audit[2113]: AVC avc: denied { mac_admin } for pid=2113 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:53.694000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:53.706978 kubelet[2113]: I0208 23:51:53.695655 2113 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 8 23:51:53.706978 kubelet[2113]: I0208 23:51:53.695703 2113 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 8 23:51:53.706978 kubelet[2113]: I0208 23:51:53.695729 2113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:51:53.706978 kubelet[2113]: I0208 23:51:53.696229 2113 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:51:53.706978 kubelet[2113]: I0208 23:51:53.697369 2113 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:51:53.694000 audit[2113]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e64180 a1=c000e66168 a2=c000e64150 a3=25 items=0 ppid=1 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:53.713345 kernel: audit: type=1300 audit(1707436313.694:227): arch=c000003e syscall=188 success=no exit=-22 a0=c000e64180 a1=c000e66168 a2=c000e64150 a3=25 items=0 ppid=1 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:53.694000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:51:53.723612 kubelet[2113]: I0208 23:51:53.723584 2113 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:51:53.694000 audit[2113]: AVC avc: denied { mac_admin } for pid=2113 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:53.727479 kernel: audit: type=1327 audit(1707436313.694:227): 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:51:53.727555 kernel: audit: type=1400 audit(1707436313.694:228): avc: denied { mac_admin } for pid=2113 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:53.727583 kernel: audit: type=1401 audit(1707436313.694:228): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:53.694000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:53.694000 audit[2113]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e21aa0 a1=c000e66180 a2=c000e64210 a3=25 items=0 ppid=1 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:53.735032 kernel: audit: type=1300 audit(1707436313.694:228): arch=c000003e syscall=188 success=no exit=-22 a0=c000e21aa0 a1=c000e66180 a2=c000e64210 a3=25 items=0 ppid=1 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:53.740522 kubelet[2113]: I0208 23:51:53.723698 2113 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:51:53.694000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:51:53.758357 kernel: audit: type=1327 audit(1707436313.694:228): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:51:53.767713 kubelet[2113]: E0208 23:51:53.767685 2113 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:51:53.767851 kubelet[2113]: E0208 23:51:53.767729 2113 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:51:53.826572 kubelet[2113]: I0208 23:51:53.826547 2113 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:53.848637 kubelet[2113]: I0208 23:51:53.845098 2113 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:53.848637 kubelet[2113]: I0208 23:51:53.845182 2113 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:53.884028 kubelet[2113]: I0208 23:51:53.883998 2113 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 8 23:51:53.897450 kubelet[2113]: I0208 23:51:53.897427 2113 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:51:53.897718 kubelet[2113]: I0208 23:51:53.897705 2113 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:51:53.897816 kubelet[2113]: I0208 23:51:53.897805 2113 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:51:53.898078 kubelet[2113]: I0208 23:51:53.898067 2113 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:51:53.898180 kubelet[2113]: I0208 23:51:53.898170 2113 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:51:53.898262 kubelet[2113]: I0208 23:51:53.898252 2113 policy_none.go:49] "None policy: Start" Feb 8 23:51:53.899400 kubelet[2113]: I0208 23:51:53.899349 2113 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:51:53.899400 kubelet[2113]: I0208 23:51:53.899378 2113 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:51:53.899534 kubelet[2113]: I0208 23:51:53.899513 2113 state_mem.go:75] "Updated machine memory state" Feb 8 23:51:53.900761 kubelet[2113]: I0208 23:51:53.900735 2113 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:51:53.899000 audit[2113]: AVC avc: denied { mac_admin } for pid=2113 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:51:53.899000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:51:53.899000 audit[2113]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008eec00 a1=c001795158 a2=c0008eebd0 a3=25 items=0 ppid=1 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:51:53.899000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:51:53.908547 kubelet[2113]: I0208 23:51:53.902863 2113 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 8 23:51:53.908547 kubelet[2113]: I0208 23:51:53.903292 2113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:51:53.936267 kubelet[2113]: I0208 23:51:53.936151 2113 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:51:53.936267 kubelet[2113]: I0208 23:51:53.936177 2113 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:51:53.936267 kubelet[2113]: I0208 23:51:53.936194 2113 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:51:53.936267 kubelet[2113]: E0208 23:51:53.936233 2113 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:51:54.036511 kubelet[2113]: I0208 23:51:54.036471 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:51:54.036885 kubelet[2113]: I0208 23:51:54.036865 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:51:54.037071 kubelet[2113]: I0208 23:51:54.037049 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:51:54.042484 kubelet[2113]: I0208 23:51:54.042453 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.042745 kubelet[2113]: I0208 23:51:54.042725 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.042940 kubelet[2113]: I0208 23:51:54.042910 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8ca20db586c41c09dbdc7b2cf2210ae-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"d8ca20db586c41c09dbdc7b2cf2210ae\") " pod="kube-system/kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.043139 kubelet[2113]: I0208 23:51:54.043120 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7fe93a4f1b488b60ac041421aa8874d-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"e7fe93a4f1b488b60ac041421aa8874d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.043395 kubelet[2113]: I0208 23:51:54.043377 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7fe93a4f1b488b60ac041421aa8874d-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"e7fe93a4f1b488b60ac041421aa8874d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.043575 kubelet[2113]: I0208 23:51:54.043556 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7fe93a4f1b488b60ac041421aa8874d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"e7fe93a4f1b488b60ac041421aa8874d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.043738 kubelet[2113]: I0208 23:51:54.043723 2113 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.043979 kubelet[2113]: I0208 23:51:54.043961 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.044188 kubelet[2113]: I0208 23:51:54.044171 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e565f677c9e93973cea69c5edaf54b6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" (UID: \"6e565f677c9e93973cea69c5edaf54b6\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.049708 kubelet[2113]: E0208 23:51:54.049670 2113 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:54.679620 kubelet[2113]: I0208 23:51:54.679561 2113 apiserver.go:52] "Watching apiserver" Feb 8 23:51:54.737868 kubelet[2113]: I0208 23:51:54.737811 2113 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:51:54.749553 kubelet[2113]: I0208 23:51:54.749510 2113 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:51:55.100688 kubelet[2113]: E0208 23:51:55.100642 2113 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:55.283581 kubelet[2113]: E0208 23:51:55.283555 2113 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:55.517870 kubelet[2113]: E0208 23:51:55.517739 2113 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:51:55.724300 kubelet[2113]: I0208 23:51:55.724250 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-a-bd3a159777.novalocal" podStartSLOduration=1.722865146 pod.CreationTimestamp="2024-02-08 23:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:51:55.722348797 +0000 UTC m=+2.154965755" watchObservedRunningTime="2024-02-08 23:51:55.722865146 +0000 UTC m=+2.155482084" Feb 8 23:51:56.489281 kubelet[2113]: I0208 23:51:56.489248 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-a-bd3a159777.novalocal" podStartSLOduration=4.489182976 
pod.CreationTimestamp="2024-02-08 23:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:51:56.128265559 +0000 UTC m=+2.560882497" watchObservedRunningTime="2024-02-08 23:51:56.489182976 +0000 UTC m=+2.921799914" Feb 8 23:51:57.071704 kubelet[2113]: I0208 23:51:57.071665 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-a-bd3a159777.novalocal" podStartSLOduration=3.071610747 pod.CreationTimestamp="2024-02-08 23:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:51:56.48981102 +0000 UTC m=+2.922427958" watchObservedRunningTime="2024-02-08 23:51:57.071610747 +0000 UTC m=+3.504227695" Feb 8 23:51:58.862733 sudo[1310]: pam_unix(sudo:session): session closed for user root Feb 8 23:51:58.861000 audit[1310]: USER_END pid=1310 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:51:58.864401 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 8 23:51:58.864461 kernel: audit: type=1106 audit(1707436318.861:230): pid=1310 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:51:58.862000 audit[1310]: CRED_DISP pid=1310 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:51:58.873321 kernel: audit: type=1104 audit(1707436318.862:231): pid=1310 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 8 23:51:59.127153 sshd[1304]: pam_unix(sshd:session): session closed for user core Feb 8 23:51:59.129000 audit[1304]: USER_END pid=1304 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:59.140367 kernel: audit: type=1106 audit(1707436319.129:232): pid=1304 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:59.129000 audit[1304]: CRED_DISP pid=1304 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:59.151399 kernel: audit: type=1104 audit(1707436319.129:233): pid=1304 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:51:59.154230 systemd[1]: sshd@6-172.24.4.64:22-172.24.4.1:47280.service: Deactivated successfully. Feb 8 23:51:59.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.64:22-172.24.4.1:47280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:59.167364 kernel: audit: type=1131 audit(1707436319.153:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.64:22-172.24.4.1:47280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:51:59.166027 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:51:59.166589 systemd-logind[1126]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:51:59.174960 systemd-logind[1126]: Removed session 7. Feb 8 23:52:06.008498 kubelet[2113]: I0208 23:52:06.008464 2113 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:52:06.009212 env[1139]: time="2024-02-08T23:52:06.009162345Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 8 23:52:06.009763 kubelet[2113]: I0208 23:52:06.009740 2113 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:52:06.572579 kubelet[2113]: I0208 23:52:06.572505 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:52:06.633974 kubelet[2113]: I0208 23:52:06.633950 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8903eb0e-0971-4259-b29d-106005feb808-xtables-lock\") pod \"kube-proxy-5ww2p\" (UID: \"8903eb0e-0971-4259-b29d-106005feb808\") " pod="kube-system/kube-proxy-5ww2p" Feb 8 23:52:06.634211 kubelet[2113]: I0208 23:52:06.634198 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8l8r\" (UniqueName: \"kubernetes.io/projected/8903eb0e-0971-4259-b29d-106005feb808-kube-api-access-d8l8r\") pod \"kube-proxy-5ww2p\" (UID: \"8903eb0e-0971-4259-b29d-106005feb808\") " pod="kube-system/kube-proxy-5ww2p" Feb 8 23:52:06.634405 kubelet[2113]: I0208 23:52:06.634392 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8903eb0e-0971-4259-b29d-106005feb808-kube-proxy\") pod \"kube-proxy-5ww2p\" (UID: \"8903eb0e-0971-4259-b29d-106005feb808\") " pod="kube-system/kube-proxy-5ww2p" Feb 8 23:52:06.634536 kubelet[2113]: I0208 23:52:06.634524 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8903eb0e-0971-4259-b29d-106005feb808-lib-modules\") pod \"kube-proxy-5ww2p\" (UID: \"8903eb0e-0971-4259-b29d-106005feb808\") " pod="kube-system/kube-proxy-5ww2p" Feb 8 23:52:06.894695 env[1139]: time="2024-02-08T23:52:06.894065914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ww2p,Uid:8903eb0e-0971-4259-b29d-106005feb808,Namespace:kube-system,Attempt:0,}" Feb 8 23:52:06.934383 env[1139]: time="2024-02-08T23:52:06.931246023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:52:06.934713 env[1139]: time="2024-02-08T23:52:06.934674740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:52:06.934829 env[1139]: time="2024-02-08T23:52:06.934804250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:52:06.935234 env[1139]: time="2024-02-08T23:52:06.935200776Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3acc66bc3a29e76bc16c4e00fed3c1f5809f45b3d466eb68cc3b4c9b1c86b933 pid=2221 runtime=io.containerd.runc.v2 Feb 8 23:52:06.971327 kubelet[2113]: I0208 23:52:06.969726 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:52:07.031254 env[1139]: time="2024-02-08T23:52:07.031208188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ww2p,Uid:8903eb0e-0971-4259-b29d-106005feb808,Namespace:kube-system,Attempt:0,} returns sandbox id \"3acc66bc3a29e76bc16c4e00fed3c1f5809f45b3d466eb68cc3b4c9b1c86b933\"" Feb 8 23:52:07.035924 env[1139]: time="2024-02-08T23:52:07.035892562Z" level=info msg="CreateContainer within sandbox \"3acc66bc3a29e76bc16c4e00fed3c1f5809f45b3d466eb68cc3b4c9b1c86b933\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:52:07.037045 kubelet[2113]: I0208 23:52:07.036909 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwz6b\" (UniqueName: \"kubernetes.io/projected/b0cc30ce-58c7-43da-865a-5ed29fc6561c-kube-api-access-bwz6b\") pod \"tigera-operator-cfc98749c-k8f6n\" (UID: \"b0cc30ce-58c7-43da-865a-5ed29fc6561c\") " pod="tigera-operator/tigera-operator-cfc98749c-k8f6n" Feb 8 23:52:07.037045 kubelet[2113]: I0208 23:52:07.036976 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0cc30ce-58c7-43da-865a-5ed29fc6561c-var-lib-calico\") pod \"tigera-operator-cfc98749c-k8f6n\" (UID: \"b0cc30ce-58c7-43da-865a-5ed29fc6561c\") " pod="tigera-operator/tigera-operator-cfc98749c-k8f6n" Feb 8 23:52:07.058789 env[1139]: time="2024-02-08T23:52:07.058743492Z" level=info msg="CreateContainer within sandbox \"3acc66bc3a29e76bc16c4e00fed3c1f5809f45b3d466eb68cc3b4c9b1c86b933\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb36a16969c870295b33a670a5005c2024e44ad9f0f83751b7203df537777fe0\"" Feb 8 23:52:07.061322 env[1139]: time="2024-02-08T23:52:07.061267079Z" level=info msg="StartContainer for \"eb36a16969c870295b33a670a5005c2024e44ad9f0f83751b7203df537777fe0\"" Feb 8 23:52:07.120453 env[1139]: time="2024-02-08T23:52:07.120416515Z" level=info msg="StartContainer for \"eb36a16969c870295b33a670a5005c2024e44ad9f0f83751b7203df537777fe0\" returns successfully" Feb 8 23:52:07.275516 env[1139]: time="2024-02-08T23:52:07.274263918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-k8f6n,Uid:b0cc30ce-58c7-43da-865a-5ed29fc6561c,Namespace:tigera-operator,Attempt:0,}" Feb 8 23:52:07.337194 env[1139]: time="2024-02-08T23:52:07.336964634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:52:07.337624 env[1139]: time="2024-02-08T23:52:07.337566836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:52:07.337872 env[1139]: time="2024-02-08T23:52:07.337797641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:52:07.338848 env[1139]: time="2024-02-08T23:52:07.338718967Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9de4a611f341af1e9d64119fbdbb2c8fd48aac92154c09da8f2f8d0536efbafd pid=2293 runtime=io.containerd.runc.v2 Feb 8 23:52:07.426482 env[1139]: time="2024-02-08T23:52:07.426445709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-k8f6n,Uid:b0cc30ce-58c7-43da-865a-5ed29fc6561c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9de4a611f341af1e9d64119fbdbb2c8fd48aac92154c09da8f2f8d0536efbafd\"" Feb 8 23:52:07.429628 env[1139]: time="2024-02-08T23:52:07.429599953Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 8 23:52:07.650000 audit[2350]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.661369 kernel: audit: type=1325 audit(1707436327.650:235): table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.650000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd0583120 a2=0 a3=7fffd058310c items=0 ppid=2271 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.650000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:52:07.681449 kernel: audit: type=1300 audit(1707436327.650:235): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd0583120 a2=0 a3=7fffd058310c items=0 ppid=2271 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.681576 kernel: audit: type=1327 audit(1707436327.650:235): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:52:07.667000 audit[2351]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.684904 kernel: audit: type=1325 audit(1707436327.667:236): table=nat:60 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.685107 kernel: audit: type=1300 audit(1707436327.667:236): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0df98280 a2=0 a3=7fff0df9826c items=0 ppid=2271 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.667000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0df98280 a2=0 a3=7fff0df9826c items=0 ppid=2271 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:52:07.693438 kernel: audit: type=1327 
audit(1707436327.667:236): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:52:07.694000 audit[2352]: NETFILTER_CFG table=mangle:61 family=10 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.694000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb40f2240 a2=0 a3=7fffb40f222c items=0 ppid=2271 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.704431 kernel: audit: type=1325 audit(1707436327.694:237): table=mangle:61 family=10 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.704578 kernel: audit: type=1300 audit(1707436327.694:237): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb40f2240 a2=0 a3=7fffb40f222c items=0 ppid=2271 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.694000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:52:07.707790 kernel: audit: type=1327 audit(1707436327.694:237): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:52:07.707000 audit[2354]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.707000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2a67c360 a2=0 a3=7ffd2a67c34c items=0 ppid=2271 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:52:07.716336 kernel: audit: type=1325 audit(1707436327.707:238): table=filter:62 family=2 entries=1 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.715000 audit[2355]: NETFILTER_CFG table=nat:63 family=10 entries=1 op=nft_register_chain pid=2355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.715000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4f3805a0 a2=0 a3=7ffe4f38058c items=0 ppid=2271 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:52:07.716000 audit[2356]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.716000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0c556be0 a2=0 a3=7ffd0c556bcc items=0 ppid=2271 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.716000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:52:07.789000 audit[2357]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.789000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd264487d0 a2=0 a3=7ffd264487bc items=0 ppid=2271 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 8 23:52:07.796009 systemd[1]: run-containerd-runc-k8s.io-3acc66bc3a29e76bc16c4e00fed3c1f5809f45b3d466eb68cc3b4c9b1c86b933-runc.Jrh9Dz.mount: Deactivated successfully. Feb 8 23:52:07.806000 audit[2359]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.806000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe9caccdd0 a2=0 a3=7ffe9caccdbc items=0 ppid=2271 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 8 23:52:07.819000 audit[2362]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.819000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc0cde2500 a2=0 a3=7ffc0cde24ec items=0 ppid=2271 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 8 23:52:07.822000 audit[2363]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.822000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3dd181b0 a2=0 a3=7ffc3dd1819c items=0 ppid=2271 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 8 23:52:07.826000 audit[2365]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule 
pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.826000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb4417d50 a2=0 a3=7ffcb4417d3c items=0 ppid=2271 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 8 23:52:07.829000 audit[2366]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.829000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff32fcf20 a2=0 a3=7ffff32fcf0c items=0 ppid=2271 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 8 23:52:07.836000 audit[2368]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.836000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcd28cc730 a2=0 a3=7ffcd28cc71c items=0 ppid=2271 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 8 23:52:07.848000 audit[2371]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.848000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeb3aa6b70 a2=0 a3=7ffeb3aa6b5c items=0 ppid=2271 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.848000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 8 23:52:07.850000 audit[2372]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.850000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe54a65450 a2=0 a3=7ffe54a6543c items=0 ppid=2271 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.850000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 8 23:52:07.855000 audit[2374]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.855000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe668ee060 a2=0 a3=7ffe668ee04c items=0 ppid=2271 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 8 23:52:07.857000 audit[2375]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.857000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe96e2a6c0 a2=0 a3=7ffe96e2a6ac items=0 ppid=2271 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 8 23:52:07.860000 audit[2377]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.860000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd7e1d1f90 a2=0 a3=7ffd7e1d1f7c items=0 ppid=2271 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:52:07.865000 audit[2380]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.865000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc77ade480 a2=0 a3=7ffc77ade46c items=0 ppid=2271 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:52:07.872000 audit[2383]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.872000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0ed1c460 a2=0 a3=7ffd0ed1c44c items=0 ppid=2271 
pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 8 23:52:07.874000 audit[2384]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.874000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc63a9b3a0 a2=0 a3=7ffc63a9b38c items=0 ppid=2271 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.874000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 8 23:52:07.877000 audit[2386]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2386 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.877000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd8efc2400 a2=0 a3=7ffd8efc23ec items=0 ppid=2271 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:52:07.881000 audit[2389]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:52:07.881000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc44ef6a0 a2=0 a3=7ffcc44ef68c items=0 ppid=2271 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:52:07.907000 audit[2393]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:07.907000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff4c15a710 a2=0 a3=7fff4c15a6fc items=0 ppid=2271 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:07.919000 audit[2393]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=2393 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:07.919000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff4c15a710 a2=0 a3=7fff4c15a6fc items=0 ppid=2271 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.919000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:07.921000 audit[2398]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.921000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffe3ffefd0 a2=0 a3=7fffe3ffefbc items=0 ppid=2271 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 8 23:52:07.925000 audit[2400]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.925000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff3ffa93c0 a2=0 a3=7fff3ffa93ac items=0 ppid=2271 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 8 23:52:07.929000 audit[2403]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.929000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc16b3f090 a2=0 a3=7ffc16b3f07c items=0 ppid=2271 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.929000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 8 23:52:07.930000 audit[2404]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.930000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9f9acd20 a2=0 a3=7ffd9f9acd0c items=0 ppid=2271 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.930000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 8 23:52:07.934000 audit[2406]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.934000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb27a4400 a2=0 a3=7ffcb27a43ec items=0 ppid=2271 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 8 23:52:07.936000 audit[2407]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.936000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd00e5bf70 a2=0 a3=7ffd00e5bf5c items=0 ppid=2271 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 8 23:52:07.945000 audit[2409]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.945000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffd548fed0 a2=0 a3=7fffd548febc items=0 ppid=2271 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 8 23:52:07.950000 audit[2412]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.950000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffe633ebf0 a2=0 a3=7fffe633ebdc items=0 ppid=2271 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 8 23:52:07.952000 audit[2413]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.952000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd33137100 a2=0 a3=7ffd331370ec 
items=0 ppid=2271 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 8 23:52:07.955000 audit[2415]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.955000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff60c1c5b0 a2=0 a3=7fff60c1c59c items=0 ppid=2271 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 8 23:52:07.956000 audit[2416]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.956000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee791a290 a2=0 a3=7ffee791a27c items=0 ppid=2271 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 8 23:52:07.959000 audit[2418]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.959000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5a1d3ef0 a2=0 a3=7ffc5a1d3edc items=0 ppid=2271 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.959000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:52:07.963000 audit[2421]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.963000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffccab7f990 a2=0 a3=7ffccab7f97c items=0 ppid=2271 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 8 23:52:07.969000 audit[2424]: NETFILTER_CFG 
table=filter:97 family=10 entries=1 op=nft_register_rule pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.969000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcd36bf380 a2=0 a3=7ffcd36bf36c items=0 ppid=2271 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 8 23:52:07.970000 audit[2425]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.970000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc9f57850 a2=0 a3=7fffc9f5783c items=0 ppid=2271 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.970000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 8 23:52:07.973000 audit[2427]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.973000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffcf2ea1180 a2=0 a3=7ffcf2ea116c items=0 ppid=2271 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:52:07.977000 audit[2430]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:52:07.977000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffddbde6f50 a2=0 a3=7ffddbde6f3c items=0 ppid=2271 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:52:07.984000 audit[2434]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 8 23:52:07.984000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcaa234e60 a2=0 a3=7ffcaa234e4c items=0 ppid=2271 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:52:07.984000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:07.985000 audit[2434]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 8 23:52:07.985000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffcaa234e60 a2=0 a3=7ffcaa234e4c items=0 ppid=2271 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:07.985000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:10.450432 env[1139]: time="2024-02-08T23:52:10.450267053Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:10.455435 env[1139]: time="2024-02-08T23:52:10.455278529Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:10.460207 env[1139]: time="2024-02-08T23:52:10.460181237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:10.463941 env[1139]: time="2024-02-08T23:52:10.463918831Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:10.465828 env[1139]: time="2024-02-08T23:52:10.465799842Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 8 23:52:10.477308 env[1139]: time="2024-02-08T23:52:10.477259852Z" level=info msg="CreateContainer within sandbox \"9de4a611f341af1e9d64119fbdbb2c8fd48aac92154c09da8f2f8d0536efbafd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 8 23:52:10.500371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228322183.mount: Deactivated successfully. Feb 8 23:52:10.507043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092996472.mount: Deactivated successfully. 
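Note (illustration, not part of the captured log): the audit PROCTITLE fields in the records above are hex-encoded, NUL-separated argument vectors of the iptables/ip6tables invocations issued while the KUBE-* chains were being registered. A minimal Python sketch for decoding one of them follows; the sample value is the ip6tables-restore proctitle recorded at 23:52:07.984 above, and any other proctitle string from this log can be substituted.

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    # Sample copied verbatim from the ip6tables-restore record above.
    proctitle = ("6970367461626C65732D726573746F7265002D770035002D5700"
                 "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(part.decode() for part in argv))
    # -> ip6tables-restore -w 5 -W 100000 --noflush --counters

Decoding the earlier records the same way shows plain chain registrations such as "iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle", which matches the NETFILTER_CFG nft_register_chain events logged alongside them.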
Feb 8 23:52:10.512965 env[1139]: time="2024-02-08T23:52:10.512927851Z" level=info msg="CreateContainer within sandbox \"9de4a611f341af1e9d64119fbdbb2c8fd48aac92154c09da8f2f8d0536efbafd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"33ec218fc6e11e05467c5a754add65845a41a578cea0346f680cc0aee7338119\"" Feb 8 23:52:10.517511 env[1139]: time="2024-02-08T23:52:10.517470977Z" level=info msg="StartContainer for \"33ec218fc6e11e05467c5a754add65845a41a578cea0346f680cc0aee7338119\"" Feb 8 23:52:10.591727 env[1139]: time="2024-02-08T23:52:10.591686939Z" level=info msg="StartContainer for \"33ec218fc6e11e05467c5a754add65845a41a578cea0346f680cc0aee7338119\" returns successfully" Feb 8 23:52:11.014991 kubelet[2113]: I0208 23:52:11.014051 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5ww2p" podStartSLOduration=5.013980078 pod.CreationTimestamp="2024-02-08 23:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:52:08.007937766 +0000 UTC m=+14.440554714" watchObservedRunningTime="2024-02-08 23:52:11.013980078 +0000 UTC m=+17.446597056" Feb 8 23:52:12.704000 audit[2499]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:12.706867 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 8 23:52:12.706942 kernel: audit: type=1325 audit(1707436332.704:279): table=filter:103 family=2 entries=13 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:12.704000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffceb95ef10 a2=0 a3=7ffceb95eefc items=0 ppid=2271 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:12.716757 kernel: audit: type=1300 audit(1707436332.704:279): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffceb95ef10 a2=0 a3=7ffceb95eefc items=0 ppid=2271 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:12.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:12.720216 kernel: audit: type=1327 audit(1707436332.704:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:12.704000 audit[2499]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:12.704000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffceb95ef10 a2=0 a3=7ffceb95eefc items=0 ppid=2271 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:12.729116 kernel: audit: type=1325 audit(1707436332.704:280): table=nat:104 family=2 entries=20 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:12.729174 kernel: audit: type=1300 
audit(1707436332.704:280): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffceb95ef10 a2=0 a3=7ffceb95eefc items=0 ppid=2271 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:12.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:12.732330 kernel: audit: type=1327 audit(1707436332.704:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:12.761000 audit[2525]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:12.761000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffee27972c0 a2=0 a3=7ffee27972ac items=0 ppid=2271 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:12.771971 kernel: audit: type=1325 audit(1707436332.761:281): table=filter:105 family=2 entries=14 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:12.772035 kernel: audit: type=1300 audit(1707436332.761:281): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffee27972c0 a2=0 a3=7ffee27972ac items=0 ppid=2271 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:12.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:12.777329 kernel: audit: type=1327 audit(1707436332.761:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:12.761000 audit[2525]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:12.761000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffee27972c0 a2=0 a3=7ffee27972ac items=0 ppid=2271 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:12.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:12.782337 kernel: audit: type=1325 audit(1707436332.761:282): table=nat:106 family=2 entries=20 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:12.859077 kubelet[2113]: I0208 23:52:12.859037 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-k8f6n" podStartSLOduration=-9.2233720299958e+09 pod.CreationTimestamp="2024-02-08 23:52:06 +0000 UTC" firstStartedPulling="2024-02-08 23:52:07.427734295 +0000 UTC m=+13.860351243" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:52:11.0163827 +0000 UTC m=+17.448999688" 
watchObservedRunningTime="2024-02-08 23:52:12.85897604 +0000 UTC m=+19.291592999" Feb 8 23:52:12.859562 kubelet[2113]: I0208 23:52:12.859185 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:52:12.867405 kubelet[2113]: W0208 23:52:12.867366 2113 reflector.go:424] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-3510-3-2-a-bd3a159777.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510-3-2-a-bd3a159777.novalocal' and this object Feb 8 23:52:12.867405 kubelet[2113]: E0208 23:52:12.867443 2113 reflector.go:140] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-3510-3-2-a-bd3a159777.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510-3-2-a-bd3a159777.novalocal' and this object Feb 8 23:52:12.868589 kubelet[2113]: W0208 23:52:12.868557 2113 reflector.go:424] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-3510-3-2-a-bd3a159777.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510-3-2-a-bd3a159777.novalocal' and this object Feb 8 23:52:12.868589 kubelet[2113]: E0208 23:52:12.868586 2113 reflector.go:140] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-3510-3-2-a-bd3a159777.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510-3-2-a-bd3a159777.novalocal' and this object Feb 8 23:52:12.877389 kubelet[2113]: I0208 23:52:12.877356 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62280aaa-3dae-4a09-9ba8-5e48f48717b3-tigera-ca-bundle\") pod \"calico-typha-5986b89f5d-m5k26\" (UID: \"62280aaa-3dae-4a09-9ba8-5e48f48717b3\") " pod="calico-system/calico-typha-5986b89f5d-m5k26" Feb 8 23:52:12.877618 kubelet[2113]: I0208 23:52:12.877604 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/62280aaa-3dae-4a09-9ba8-5e48f48717b3-typha-certs\") pod \"calico-typha-5986b89f5d-m5k26\" (UID: \"62280aaa-3dae-4a09-9ba8-5e48f48717b3\") " pod="calico-system/calico-typha-5986b89f5d-m5k26" Feb 8 23:52:12.877730 kubelet[2113]: I0208 23:52:12.877719 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkmg\" (UniqueName: \"kubernetes.io/projected/62280aaa-3dae-4a09-9ba8-5e48f48717b3-kube-api-access-fmkmg\") pod \"calico-typha-5986b89f5d-m5k26\" (UID: \"62280aaa-3dae-4a09-9ba8-5e48f48717b3\") " pod="calico-system/calico-typha-5986b89f5d-m5k26" Feb 8 23:52:13.312957 kubelet[2113]: I0208 23:52:13.312920 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:52:13.381984 kubelet[2113]: I0208 23:52:13.381959 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13103037-43bc-416d-a96b-e9fd1f75ed85-lib-modules\") pod 
\"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.382189 kubelet[2113]: I0208 23:52:13.382177 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13103037-43bc-416d-a96b-e9fd1f75ed85-policysync\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.382320 kubelet[2113]: I0208 23:52:13.382288 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13103037-43bc-416d-a96b-e9fd1f75ed85-flexvol-driver-host\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.382429 kubelet[2113]: I0208 23:52:13.382418 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q74s\" (UniqueName: \"kubernetes.io/projected/13103037-43bc-416d-a96b-e9fd1f75ed85-kube-api-access-7q74s\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.382541 kubelet[2113]: I0208 23:52:13.382531 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13103037-43bc-416d-a96b-e9fd1f75ed85-cni-bin-dir\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.382643 kubelet[2113]: I0208 23:52:13.382632 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13103037-43bc-416d-a96b-e9fd1f75ed85-cni-net-dir\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.382744 kubelet[2113]: I0208 23:52:13.382734 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13103037-43bc-416d-a96b-e9fd1f75ed85-cni-log-dir\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.382845 kubelet[2113]: I0208 23:52:13.382835 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13103037-43bc-416d-a96b-e9fd1f75ed85-node-certs\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.382945 kubelet[2113]: I0208 23:52:13.382935 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13103037-43bc-416d-a96b-e9fd1f75ed85-var-run-calico\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.383042 kubelet[2113]: I0208 23:52:13.383031 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13103037-43bc-416d-a96b-e9fd1f75ed85-var-lib-calico\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " 
pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.383153 kubelet[2113]: I0208 23:52:13.383142 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13103037-43bc-416d-a96b-e9fd1f75ed85-xtables-lock\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.383264 kubelet[2113]: I0208 23:52:13.383253 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13103037-43bc-416d-a96b-e9fd1f75ed85-tigera-ca-bundle\") pod \"calico-node-ts67l\" (UID: \"13103037-43bc-416d-a96b-e9fd1f75ed85\") " pod="calico-system/calico-node-ts67l" Feb 8 23:52:13.428397 kubelet[2113]: I0208 23:52:13.428341 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:52:13.428766 kubelet[2113]: E0208 23:52:13.428722 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:13.484418 kubelet[2113]: I0208 23:52:13.484365 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c90b1627-bdd7-4b0e-9b33-829b081056fe-socket-dir\") pod \"csi-node-driver-cp9fc\" (UID: \"c90b1627-bdd7-4b0e-9b33-829b081056fe\") " pod="calico-system/csi-node-driver-cp9fc" Feb 8 23:52:13.484846 kubelet[2113]: I0208 23:52:13.484824 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvwl6\" (UniqueName: \"kubernetes.io/projected/c90b1627-bdd7-4b0e-9b33-829b081056fe-kube-api-access-rvwl6\") pod \"csi-node-driver-cp9fc\" (UID: \"c90b1627-bdd7-4b0e-9b33-829b081056fe\") " pod="calico-system/csi-node-driver-cp9fc" Feb 8 23:52:13.485164 kubelet[2113]: I0208 23:52:13.485142 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c90b1627-bdd7-4b0e-9b33-829b081056fe-varrun\") pod \"csi-node-driver-cp9fc\" (UID: \"c90b1627-bdd7-4b0e-9b33-829b081056fe\") " pod="calico-system/csi-node-driver-cp9fc" Feb 8 23:52:13.485383 kubelet[2113]: E0208 23:52:13.485343 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:13.485383 kubelet[2113]: W0208 23:52:13.485367 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:13.485543 kubelet[2113]: E0208 23:52:13.485401 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:52:13.485602 kubelet[2113]: E0208 23:52:13.485557 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:13.485602 kubelet[2113]: W0208 23:52:13.485566 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:13.485602 kubelet[2113]: E0208 23:52:13.485582 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:52:13.505032 kubelet[2113]: I0208 23:52:13.504981 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c90b1627-bdd7-4b0e-9b33-829b081056fe-kubelet-dir\") pod \"csi-node-driver-cp9fc\" (UID: \"c90b1627-bdd7-4b0e-9b33-829b081056fe\") " pod="calico-system/csi-node-driver-cp9fc"
Feb 8 23:52:13.507575 kubelet[2113]: I0208 23:52:13.507540 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c90b1627-bdd7-4b0e-9b33-829b081056fe-registration-dir\") pod \"csi-node-driver-cp9fc\" (UID: \"c90b1627-bdd7-4b0e-9b33-829b081056fe\") " pod="calico-system/csi-node-driver-cp9fc"
Feb 8 23:52:13.868000 audit[2643]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:13.868000 audit[2643]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffc9575fd50 a2=0 a3=7ffc9575fd3c items=0 ppid=2271 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:13.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:13.871000 audit[2643]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:13.871000 audit[2643]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc9575fd50 a2=0 a3=7ffc9575fd3c items=0 ppid=2271 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:13.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:52:13.979516 kubelet[2113]: E0208 23:52:13.979493 2113 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:52:13.979791 kubelet[2113]: E0208 23:52:13.979775 2113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62280aaa-3dae-4a09-9ba8-5e48f48717b3-tigera-ca-bundle podName:62280aaa-3dae-4a09-9ba8-5e48f48717b3 nodeName:}" failed. No retries permitted until 2024-02-08 23:52:14.479753177 +0000 UTC m=+20.912370125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/62280aaa-3dae-4a09-9ba8-5e48f48717b3-tigera-ca-bundle") pod "calico-typha-5986b89f5d-m5k26" (UID: "62280aaa-3dae-4a09-9ba8-5e48f48717b3") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:52:13.980473 kubelet[2113]: E0208 23:52:13.980461 2113 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Feb 8 23:52:13.980590 kubelet[2113]: E0208 23:52:13.980579 2113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62280aaa-3dae-4a09-9ba8-5e48f48717b3-typha-certs podName:62280aaa-3dae-4a09-9ba8-5e48f48717b3 nodeName:}" failed. No retries permitted until 2024-02-08 23:52:14.480563904 +0000 UTC m=+20.913180852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/62280aaa-3dae-4a09-9ba8-5e48f48717b3-typha-certs") pod "calico-typha-5986b89f5d-m5k26" (UID: "62280aaa-3dae-4a09-9ba8-5e48f48717b3") : failed to sync secret cache: timed out waiting for the condition
Feb 8 23:52:14.335283 kubelet[2113]: E0208 23:52:14.335257 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.335283 kubelet[2113]: W0208 23:52:14.335280 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.335395 kubelet[2113]: E0208 23:52:14.335327 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 8 23:52:14.436373 kubelet[2113]: E0208 23:52:14.436269 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.436373 kubelet[2113]: W0208 23:52:14.436290 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.436813 kubelet[2113]: E0208 23:52:14.436793 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.437797 kubelet[2113]: E0208 23:52:14.437746 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.437797 kubelet[2113]: W0208 23:52:14.437773 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.437797 kubelet[2113]: E0208 23:52:14.437789 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.517872 env[1139]: time="2024-02-08T23:52:14.517509328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ts67l,Uid:13103037-43bc-416d-a96b-e9fd1f75ed85,Namespace:calico-system,Attempt:0,}" Feb 8 23:52:14.539114 kubelet[2113]: E0208 23:52:14.539087 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.539114 kubelet[2113]: W0208 23:52:14.539110 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.539284 kubelet[2113]: E0208 23:52:14.539133 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.539637 kubelet[2113]: E0208 23:52:14.539616 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.539716 kubelet[2113]: W0208 23:52:14.539630 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.539716 kubelet[2113]: E0208 23:52:14.539702 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:52:14.540352 kubelet[2113]: E0208 23:52:14.540327 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.540352 kubelet[2113]: W0208 23:52:14.540342 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.540352 kubelet[2113]: E0208 23:52:14.540362 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.540712 kubelet[2113]: E0208 23:52:14.540694 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.540712 kubelet[2113]: W0208 23:52:14.540708 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.540809 kubelet[2113]: E0208 23:52:14.540790 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.540982 kubelet[2113]: E0208 23:52:14.540952 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.541035 kubelet[2113]: W0208 23:52:14.540977 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.541035 kubelet[2113]: E0208 23:52:14.541013 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.541247 kubelet[2113]: E0208 23:52:14.541228 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.541330 kubelet[2113]: W0208 23:52:14.541252 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.541330 kubelet[2113]: E0208 23:52:14.541272 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.541571 kubelet[2113]: E0208 23:52:14.541554 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.541571 kubelet[2113]: W0208 23:52:14.541567 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.541668 kubelet[2113]: E0208 23:52:14.541583 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:52:14.541818 kubelet[2113]: E0208 23:52:14.541799 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.541818 kubelet[2113]: W0208 23:52:14.541813 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.541898 kubelet[2113]: E0208 23:52:14.541842 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.542061 kubelet[2113]: E0208 23:52:14.542045 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.542061 kubelet[2113]: W0208 23:52:14.542057 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.542144 kubelet[2113]: E0208 23:52:14.542069 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.542327 kubelet[2113]: E0208 23:52:14.542287 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.542327 kubelet[2113]: W0208 23:52:14.542325 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.542413 kubelet[2113]: E0208 23:52:14.542338 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.543387 kubelet[2113]: E0208 23:52:14.543371 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.543387 kubelet[2113]: W0208 23:52:14.543385 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.543547 kubelet[2113]: E0208 23:52:14.543399 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.558848 env[1139]: time="2024-02-08T23:52:14.558775903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:52:14.559015 env[1139]: time="2024-02-08T23:52:14.558857300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:52:14.559015 env[1139]: time="2024-02-08T23:52:14.558888319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:52:14.559084 env[1139]: time="2024-02-08T23:52:14.559035412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3064daba5afb5c965f3b77f0b2084b0576b1d7d104954a0433fadfd9e84abc7 pid=2679 runtime=io.containerd.runc.v2 Feb 8 23:52:14.635666 env[1139]: time="2024-02-08T23:52:14.635593232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ts67l,Uid:13103037-43bc-416d-a96b-e9fd1f75ed85,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3064daba5afb5c965f3b77f0b2084b0576b1d7d104954a0433fadfd9e84abc7\"" Feb 8 23:52:14.639223 env[1139]: time="2024-02-08T23:52:14.639194270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 8 23:52:14.642012 kubelet[2113]: E0208 23:52:14.641880 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.642012 kubelet[2113]: W0208 23:52:14.641942 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.642012 kubelet[2113]: E0208 23:52:14.641970 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.670405 kubelet[2113]: E0208 23:52:14.670363 2113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:52:14.670557 kubelet[2113]: W0208 23:52:14.670543 2113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:52:14.670658 kubelet[2113]: E0208 23:52:14.670647 2113 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:52:14.936976 kubelet[2113]: E0208 23:52:14.936840 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:14.967203 env[1139]: time="2024-02-08T23:52:14.967102526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5986b89f5d-m5k26,Uid:62280aaa-3dae-4a09-9ba8-5e48f48717b3,Namespace:calico-system,Attempt:0,}" Feb 8 23:52:15.619813 env[1139]: time="2024-02-08T23:52:15.619004066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:52:15.619813 env[1139]: time="2024-02-08T23:52:15.619096263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:52:15.619813 env[1139]: time="2024-02-08T23:52:15.619128775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:52:15.634916 env[1139]: time="2024-02-08T23:52:15.621231703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe8d304453991fd3d1aa6df826b682ed8634c779e31034dcf1bf6325259c1553 pid=2730 runtime=io.containerd.runc.v2 Feb 8 23:52:15.657432 systemd[1]: run-containerd-runc-k8s.io-fe8d304453991fd3d1aa6df826b682ed8634c779e31034dcf1bf6325259c1553-runc.4SFVlb.mount: Deactivated successfully. Feb 8 23:52:15.725186 env[1139]: time="2024-02-08T23:52:15.725116314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5986b89f5d-m5k26,Uid:62280aaa-3dae-4a09-9ba8-5e48f48717b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe8d304453991fd3d1aa6df826b682ed8634c779e31034dcf1bf6325259c1553\"" Feb 8 23:52:16.834822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073900778.mount: Deactivated successfully. Feb 8 23:52:16.937521 kubelet[2113]: E0208 23:52:16.937158 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:18.209533 env[1139]: time="2024-02-08T23:52:18.209480426Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:18.212909 env[1139]: time="2024-02-08T23:52:18.212864487Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:18.216595 env[1139]: time="2024-02-08T23:52:18.216554023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:18.220926 env[1139]: time="2024-02-08T23:52:18.220876515Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:18.222195 env[1139]: time="2024-02-08T23:52:18.222171017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 8 23:52:18.223440 env[1139]: time="2024-02-08T23:52:18.223418209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 8 23:52:18.227056 env[1139]: time="2024-02-08T23:52:18.227012313Z" level=info msg="CreateContainer within sandbox \"a3064daba5afb5c965f3b77f0b2084b0576b1d7d104954a0433fadfd9e84abc7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 8 23:52:18.243966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767185368.mount: Deactivated successfully. 
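The repeated kubelet triplets earlier in this window (driver-call.go:262, driver-call.go:149, plugins.go:736) all describe one condition: the FlexVolume probe tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not installed yet ("executable file not found in $PATH"), so the call returns empty output, and decoding an empty byte slice is exactly what produces "unexpected end of JSON input". A minimal Go illustration (my own sketch, not kubelet source) reproduces the error string:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the kind of struct a FlexVolume caller would decode
// the driver's stdout into; the field names here are illustrative only.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	var st driverStatus
	// The driver binary was never found, so its "output" is the empty string.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // prints: unexpected end of JSON input
}
```

The "executable file not found in $PATH" warning and the "unexpected end of JSON input" error are therefore two views of the same missing binary, which is why the three lines always appear together.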
Feb 8 23:52:18.252417 env[1139]: time="2024-02-08T23:52:18.252381091Z" level=info msg="CreateContainer within sandbox \"a3064daba5afb5c965f3b77f0b2084b0576b1d7d104954a0433fadfd9e84abc7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0d88605ecfe1c940d4ed4050751749644ebf0f53749e2084a9e67299a60516d1\"" Feb 8 23:52:18.255018 env[1139]: time="2024-02-08T23:52:18.254990045Z" level=info msg="StartContainer for \"0d88605ecfe1c940d4ed4050751749644ebf0f53749e2084a9e67299a60516d1\"" Feb 8 23:52:18.335967 env[1139]: time="2024-02-08T23:52:18.335909932Z" level=info msg="StartContainer for \"0d88605ecfe1c940d4ed4050751749644ebf0f53749e2084a9e67299a60516d1\" returns successfully" Feb 8 23:52:18.498700 env[1139]: time="2024-02-08T23:52:18.498514330Z" level=info msg="shim disconnected" id=0d88605ecfe1c940d4ed4050751749644ebf0f53749e2084a9e67299a60516d1 Feb 8 23:52:18.500219 env[1139]: time="2024-02-08T23:52:18.500178853Z" level=warning msg="cleaning up after shim disconnected" id=0d88605ecfe1c940d4ed4050751749644ebf0f53749e2084a9e67299a60516d1 namespace=k8s.io Feb 8 23:52:18.501171 env[1139]: time="2024-02-08T23:52:18.501138524Z" level=info msg="cleaning up dead shim" Feb 8 23:52:18.531954 env[1139]: time="2024-02-08T23:52:18.531872232Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:52:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2813 runtime=io.containerd.runc.v2\n" Feb 8 23:52:18.938163 kubelet[2113]: E0208 23:52:18.937947 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:19.242054 systemd[1]: run-containerd-runc-k8s.io-0d88605ecfe1c940d4ed4050751749644ebf0f53749e2084a9e67299a60516d1-runc.RuKIOp.mount: Deactivated successfully. Feb 8 23:52:19.242653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d88605ecfe1c940d4ed4050751749644ebf0f53749e2084a9e67299a60516d1-rootfs.mount: Deactivated successfully. Feb 8 23:52:19.960406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount742569537.mount: Deactivated successfully. 
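The nodeagent~uds driver those probes look for is what the pod2daemon-flexvol image pulled above normally provides: the short-lived flexvol-driver container that just ran (start, then "shim disconnected" once it exits) is the usual installer for a driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/. For orientation only, a FlexVolume driver is just an executable that answers subcommands such as init with a small JSON document on stdout; a hypothetical minimal driver in Go, following the usual FlexVolume conventions and not Calico's actual implementation, might look like:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// initResponse is the conventional FlexVolume reply: a status string plus a
// capabilities map telling kubelet whether the driver supports attach/detach.
type initResponse struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(initResponse{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any other subcommand is reported as unsupported, the conventional way
	// to let kubelet fall back to its default handling.
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}
```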
Feb 8 23:52:20.938944 kubelet[2113]: E0208 23:52:20.937796 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:22.377095 env[1139]: time="2024-02-08T23:52:22.377002558Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:22.380741 env[1139]: time="2024-02-08T23:52:22.380711885Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:22.385867 env[1139]: time="2024-02-08T23:52:22.385836963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:22.391072 env[1139]: time="2024-02-08T23:52:22.391040902Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:22.393972 env[1139]: time="2024-02-08T23:52:22.393942140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\"" Feb 8 23:52:22.398289 env[1139]: time="2024-02-08T23:52:22.397885143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 8 23:52:22.421361 env[1139]: time="2024-02-08T23:52:22.419647754Z" level=info msg="CreateContainer within sandbox \"fe8d304453991fd3d1aa6df826b682ed8634c779e31034dcf1bf6325259c1553\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 8 23:52:22.436335 env[1139]: time="2024-02-08T23:52:22.436274065Z" level=info msg="CreateContainer within sandbox \"fe8d304453991fd3d1aa6df826b682ed8634c779e31034dcf1bf6325259c1553\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fc15f4b7d908573b0f03c1d87e239b3ecd5be3329a24361b590a6754821f02bc\"" Feb 8 23:52:22.437089 env[1139]: time="2024-02-08T23:52:22.437066252Z" level=info msg="StartContainer for \"fc15f4b7d908573b0f03c1d87e239b3ecd5be3329a24361b590a6754821f02bc\"" Feb 8 23:52:22.547553 env[1139]: time="2024-02-08T23:52:22.547494599Z" level=info msg="StartContainer for \"fc15f4b7d908573b0f03c1d87e239b3ecd5be3329a24361b590a6754821f02bc\" returns successfully" Feb 8 23:52:22.937219 kubelet[2113]: E0208 23:52:22.937171 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:23.061333 kubelet[2113]: I0208 23:52:23.061235 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5986b89f5d-m5k26" podStartSLOduration=-9.223372025793623e+09 pod.CreationTimestamp="2024-02-08 23:52:12 +0000 UTC" firstStartedPulling="2024-02-08 23:52:15.726649959 +0000 UTC m=+22.159266897" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:52:23.06004963 +0000 UTC m=+29.492666628" watchObservedRunningTime="2024-02-08 23:52:23.061152732 +0000 UTC m=+29.493769720" Feb 8 23:52:24.042537 kubelet[2113]: I0208 23:52:24.042453 2113 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 8 23:52:24.950775 kubelet[2113]: E0208 23:52:24.950734 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:24.951085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount390556586.mount: Deactivated successfully. Feb 8 23:52:26.050351 kubelet[2113]: I0208 23:52:26.049831 2113 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 8 23:52:26.192000 audit[2898]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:26.194282 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 8 23:52:26.194350 kernel: audit: type=1325 audit(1707436346.192:285): table=filter:109 family=2 entries=13 op=nft_register_rule pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:26.192000 audit[2898]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffcb1e2fe00 a2=0 a3=7ffcb1e2fdec items=0 ppid=2271 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:26.202589 kernel: audit: type=1300 audit(1707436346.192:285): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffcb1e2fe00 a2=0 a3=7ffcb1e2fdec items=0 ppid=2271 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:26.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:26.209313 kernel: audit: type=1327 audit(1707436346.192:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:26.194000 audit[2898]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:26.214311 kernel: audit: type=1325 audit(1707436346.194:286): table=nat:110 family=2 entries=27 op=nft_register_chain pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:26.194000 audit[2898]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffcb1e2fe00 a2=0 a3=7ffcb1e2fdec items=0 ppid=2271 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:26.194000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:26.224414 kernel: audit: type=1300 audit(1707436346.194:286): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffcb1e2fe00 a2=0 a3=7ffcb1e2fdec items=0 
ppid=2271 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:26.224476 kernel: audit: type=1327 audit(1707436346.194:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:26.937647 kubelet[2113]: E0208 23:52:26.937591 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:28.940426 kubelet[2113]: E0208 23:52:28.940261 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:30.937228 kubelet[2113]: E0208 23:52:30.937178 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:31.345827 env[1139]: time="2024-02-08T23:52:31.345735667Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:31.351370 env[1139]: time="2024-02-08T23:52:31.351268587Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:31.361496 env[1139]: time="2024-02-08T23:52:31.361432943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:31.364881 env[1139]: time="2024-02-08T23:52:31.364597424Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:31.367154 env[1139]: time="2024-02-08T23:52:31.367097364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 8 23:52:31.379449 env[1139]: time="2024-02-08T23:52:31.379351738Z" level=info msg="CreateContainer within sandbox \"a3064daba5afb5c965f3b77f0b2084b0576b1d7d104954a0433fadfd9e84abc7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 8 23:52:31.409873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070676182.mount: Deactivated successfully. 
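The audit PROCTITLE records here and at 23:52:13 store the command line hex-encoded, with NUL bytes separating the arguments. In the SYSCALL records, arch=c000003e identifies x86_64 and syscall 46 on that architecture is sendmsg, consistent with xtables-nft-multi pushing the ruleset to the kernel over netlink. Decoding the hex string copied verbatim from the log:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Hex string taken verbatim from the PROCTITLE audit records above.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Arguments are NUL-separated; join them with spaces for display.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output: iptables-restore -w 5 -W 100000 --noflush --counters
}
```

So the audited process is kube-proxy's usual iptables-restore invocation with wait/retry flags, not anything unexpected.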
Feb 8 23:52:31.483797 env[1139]: time="2024-02-08T23:52:31.483703038Z" level=info msg="CreateContainer within sandbox \"a3064daba5afb5c965f3b77f0b2084b0576b1d7d104954a0433fadfd9e84abc7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e0554e9aea1854a150c2633fbdf27a57f1c5bce42dde2b03feb2e5aff62f813d\"" Feb 8 23:52:31.485230 env[1139]: time="2024-02-08T23:52:31.485149976Z" level=info msg="StartContainer for \"e0554e9aea1854a150c2633fbdf27a57f1c5bce42dde2b03feb2e5aff62f813d\"" Feb 8 23:52:31.562001 systemd[1]: run-containerd-runc-k8s.io-e0554e9aea1854a150c2633fbdf27a57f1c5bce42dde2b03feb2e5aff62f813d-runc.kGUauN.mount: Deactivated successfully. Feb 8 23:52:31.677665 env[1139]: time="2024-02-08T23:52:31.677506343Z" level=info msg="StartContainer for \"e0554e9aea1854a150c2633fbdf27a57f1c5bce42dde2b03feb2e5aff62f813d\" returns successfully" Feb 8 23:52:32.937707 kubelet[2113]: E0208 23:52:32.937651 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:34.937147 kubelet[2113]: E0208 23:52:34.937080 2113 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:35.987771 env[1139]: time="2024-02-08T23:52:35.987574682Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:52:36.028361 kubelet[2113]: I0208 23:52:36.026743 2113 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:52:36.059365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0554e9aea1854a150c2633fbdf27a57f1c5bce42dde2b03feb2e5aff62f813d-rootfs.mount: Deactivated successfully. 
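One oddity worth flagging is the pod_startup_latency_tracker record at 23:52:23 above, where podStartSLOduration is roughly -9.22e9 seconds and lastFinishedPulling is the zero time "0001-01-01 00:00:00 +0000 UTC". That magnitude is consistent with a Go time.Duration underflowing and saturating at math.MinInt64 nanoseconds, so the number is an artifact of a zero timestamp entering the arithmetic rather than a real measurement. A small illustration of the saturation (not kubelet's actual code path):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 UTC
	firstStartedPulling := time.Date(2024, 2, 8, 23, 52, 15, 0, time.UTC)

	// time.Time.Sub saturates at the minimum Duration instead of wrapping.
	d := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(d == time.Duration(math.MinInt64)) // true
	fmt.Printf("%.3e seconds\n", d.Seconds())      // about -9.223e+09, matching the log's scale
}
```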
Feb 8 23:52:36.079719 kubelet[2113]: I0208 23:52:36.079691 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:52:36.085939 kubelet[2113]: I0208 23:52:36.085916 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:52:36.086240 kubelet[2113]: I0208 23:52:36.086226 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:52:36.089550 env[1139]: time="2024-02-08T23:52:36.089495515Z" level=info msg="shim disconnected" id=e0554e9aea1854a150c2633fbdf27a57f1c5bce42dde2b03feb2e5aff62f813d Feb 8 23:52:36.089550 env[1139]: time="2024-02-08T23:52:36.089547270Z" level=warning msg="cleaning up after shim disconnected" id=e0554e9aea1854a150c2633fbdf27a57f1c5bce42dde2b03feb2e5aff62f813d namespace=k8s.io Feb 8 23:52:36.089732 env[1139]: time="2024-02-08T23:52:36.089560585Z" level=info msg="cleaning up dead shim" Feb 8 23:52:36.092420 kubelet[2113]: W0208 23:52:36.092400 2113 reflector.go:424] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-2-a-bd3a159777.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-a-bd3a159777.novalocal' and this object Feb 8 23:52:36.092594 kubelet[2113]: E0208 23:52:36.092583 2113 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-2-a-bd3a159777.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-a-bd3a159777.novalocal' and this object Feb 8 23:52:36.110515 env[1139]: time="2024-02-08T23:52:36.110410562Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:52:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2953 runtime=io.containerd.runc.v2\n" Feb 8 23:52:36.155712 kubelet[2113]: I0208 23:52:36.155678 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d775a679-abfc-4c4d-a44c-4e5893e5a899-tigera-ca-bundle\") pod \"calico-kube-controllers-78f9d567d-vn2vk\" (UID: \"d775a679-abfc-4c4d-a44c-4e5893e5a899\") " pod="calico-system/calico-kube-controllers-78f9d567d-vn2vk" Feb 8 23:52:36.155712 kubelet[2113]: I0208 23:52:36.155727 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ztfm\" (UniqueName: \"kubernetes.io/projected/3774f726-461c-4a92-8c72-de7a78a4ac63-kube-api-access-5ztfm\") pod \"coredns-787d4945fb-6c8kq\" (UID: \"3774f726-461c-4a92-8c72-de7a78a4ac63\") " pod="kube-system/coredns-787d4945fb-6c8kq" Feb 8 23:52:36.156601 kubelet[2113]: I0208 23:52:36.155754 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3774f726-461c-4a92-8c72-de7a78a4ac63-config-volume\") pod \"coredns-787d4945fb-6c8kq\" (UID: \"3774f726-461c-4a92-8c72-de7a78a4ac63\") " pod="kube-system/coredns-787d4945fb-6c8kq" Feb 8 23:52:36.156601 kubelet[2113]: I0208 23:52:36.155782 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dccf5af4-92a1-4f4c-ac0e-a30203c7f99d-config-volume\") pod \"coredns-787d4945fb-2d8cl\" (UID: \"dccf5af4-92a1-4f4c-ac0e-a30203c7f99d\") " pod="kube-system/coredns-787d4945fb-2d8cl" Feb 8 
23:52:36.156601 kubelet[2113]: I0208 23:52:36.155815 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh6zm\" (UniqueName: \"kubernetes.io/projected/d775a679-abfc-4c4d-a44c-4e5893e5a899-kube-api-access-kh6zm\") pod \"calico-kube-controllers-78f9d567d-vn2vk\" (UID: \"d775a679-abfc-4c4d-a44c-4e5893e5a899\") " pod="calico-system/calico-kube-controllers-78f9d567d-vn2vk" Feb 8 23:52:36.156601 kubelet[2113]: I0208 23:52:36.155841 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkh2q\" (UniqueName: \"kubernetes.io/projected/dccf5af4-92a1-4f4c-ac0e-a30203c7f99d-kube-api-access-xkh2q\") pod \"coredns-787d4945fb-2d8cl\" (UID: \"dccf5af4-92a1-4f4c-ac0e-a30203c7f99d\") " pod="kube-system/coredns-787d4945fb-2d8cl" Feb 8 23:52:36.396718 env[1139]: time="2024-02-08T23:52:36.395988609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f9d567d-vn2vk,Uid:d775a679-abfc-4c4d-a44c-4e5893e5a899,Namespace:calico-system,Attempt:0,}" Feb 8 23:52:36.630091 env[1139]: time="2024-02-08T23:52:36.629972412Z" level=error msg="Failed to destroy network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:36.630581 env[1139]: time="2024-02-08T23:52:36.630552162Z" level=error msg="encountered an error cleaning up failed sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:36.631656 env[1139]: time="2024-02-08T23:52:36.630697641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f9d567d-vn2vk,Uid:d775a679-abfc-4c4d-a44c-4e5893e5a899,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:36.631768 kubelet[2113]: E0208 23:52:36.630910 2113 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:36.631768 kubelet[2113]: E0208 23:52:36.630969 2113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f9d567d-vn2vk" Feb 8 23:52:36.631768 kubelet[2113]: E0208 23:52:36.630996 2113 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f9d567d-vn2vk" Feb 8 23:52:36.631890 kubelet[2113]: E0208 23:52:36.631084 2113 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78f9d567d-vn2vk_calico-system(d775a679-abfc-4c4d-a44c-4e5893e5a899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78f9d567d-vn2vk_calico-system(d775a679-abfc-4c4d-a44c-4e5893e5a899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f9d567d-vn2vk" podUID=d775a679-abfc-4c4d-a44c-4e5893e5a899 Feb 8 23:52:36.943132 env[1139]: time="2024-02-08T23:52:36.942428691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cp9fc,Uid:c90b1627-bdd7-4b0e-9b33-829b081056fe,Namespace:calico-system,Attempt:0,}" Feb 8 23:52:36.987258 env[1139]: time="2024-02-08T23:52:36.987180484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6c8kq,Uid:3774f726-461c-4a92-8c72-de7a78a4ac63,Namespace:kube-system,Attempt:0,}" Feb 8 23:52:36.998868 env[1139]: time="2024-02-08T23:52:36.998754144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-2d8cl,Uid:dccf5af4-92a1-4f4c-ac0e-a30203c7f99d,Namespace:kube-system,Attempt:0,}" Feb 8 23:52:37.063191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd-shm.mount: Deactivated successfully. Feb 8 23:52:37.103246 kubelet[2113]: I0208 23:52:37.103216 2113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:52:37.106200 env[1139]: time="2024-02-08T23:52:37.106166139Z" level=info msg="StopPodSandbox for \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\"" Feb 8 23:52:37.106400 env[1139]: time="2024-02-08T23:52:37.106169956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 8 23:52:37.138503 env[1139]: time="2024-02-08T23:52:37.138452677Z" level=error msg="Failed to destroy network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.140833 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b-shm.mount: Deactivated successfully. 
Feb 8 23:52:37.141575 env[1139]: time="2024-02-08T23:52:37.141540897Z" level=error msg="encountered an error cleaning up failed sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.141696 env[1139]: time="2024-02-08T23:52:37.141663594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cp9fc,Uid:c90b1627-bdd7-4b0e-9b33-829b081056fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.142038 kubelet[2113]: E0208 23:52:37.142007 2113 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.142108 kubelet[2113]: E0208 23:52:37.142080 2113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cp9fc" Feb 8 23:52:37.142146 kubelet[2113]: E0208 23:52:37.142111 2113 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cp9fc" Feb 8 23:52:37.142201 kubelet[2113]: E0208 23:52:37.142181 2113 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cp9fc_calico-system(c90b1627-bdd7-4b0e-9b33-829b081056fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cp9fc_calico-system(c90b1627-bdd7-4b0e-9b33-829b081056fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:37.173927 env[1139]: time="2024-02-08T23:52:37.173865736Z" level=error msg="Failed to destroy network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 8 23:52:37.175988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532-shm.mount: Deactivated successfully. Feb 8 23:52:37.177208 env[1139]: time="2024-02-08T23:52:37.177140741Z" level=error msg="encountered an error cleaning up failed sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.177281 env[1139]: time="2024-02-08T23:52:37.177225468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6c8kq,Uid:3774f726-461c-4a92-8c72-de7a78a4ac63,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.177487 kubelet[2113]: E0208 23:52:37.177464 2113 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.177569 kubelet[2113]: E0208 23:52:37.177538 2113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-6c8kq" Feb 8 23:52:37.177569 kubelet[2113]: E0208 23:52:37.177567 2113 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-6c8kq" Feb 8 23:52:37.177666 kubelet[2113]: E0208 23:52:37.177642 2113 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-6c8kq_kube-system(3774f726-461c-4a92-8c72-de7a78a4ac63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-6c8kq_kube-system(3774f726-461c-4a92-8c72-de7a78a4ac63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-6c8kq" podUID=3774f726-461c-4a92-8c72-de7a78a4ac63 Feb 8 23:52:37.188574 env[1139]: time="2024-02-08T23:52:37.188496709Z" level=error msg="StopPodSandbox for 
\"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\" failed" error="failed to destroy network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.188909 kubelet[2113]: E0208 23:52:37.188851 2113 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:52:37.188983 kubelet[2113]: E0208 23:52:37.188967 2113 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd} Feb 8 23:52:37.189037 kubelet[2113]: E0208 23:52:37.189016 2113 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d775a679-abfc-4c4d-a44c-4e5893e5a899\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:52:37.189139 kubelet[2113]: E0208 23:52:37.189088 2113 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d775a679-abfc-4c4d-a44c-4e5893e5a899\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f9d567d-vn2vk" podUID=d775a679-abfc-4c4d-a44c-4e5893e5a899 Feb 8 23:52:37.207117 env[1139]: time="2024-02-08T23:52:37.207005572Z" level=error msg="Failed to destroy network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.209979 env[1139]: time="2024-02-08T23:52:37.209941931Z" level=error msg="encountered an error cleaning up failed sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.210125 env[1139]: time="2024-02-08T23:52:37.210091818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-2d8cl,Uid:dccf5af4-92a1-4f4c-ac0e-a30203c7f99d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.211599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010-shm.mount: Deactivated successfully. Feb 8 23:52:37.212709 kubelet[2113]: E0208 23:52:37.212682 2113 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:37.212781 kubelet[2113]: E0208 23:52:37.212758 2113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-2d8cl" Feb 8 23:52:37.212819 kubelet[2113]: E0208 23:52:37.212787 2113 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-2d8cl" Feb 8 23:52:37.212923 kubelet[2113]: E0208 23:52:37.212895 2113 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-2d8cl_kube-system(dccf5af4-92a1-4f4c-ac0e-a30203c7f99d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-2d8cl_kube-system(dccf5af4-92a1-4f4c-ac0e-a30203c7f99d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-2d8cl" podUID=dccf5af4-92a1-4f4c-ac0e-a30203c7f99d Feb 8 23:52:38.110196 kubelet[2113]: I0208 23:52:38.110141 2113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:38.111802 env[1139]: time="2024-02-08T23:52:38.111708536Z" level=info msg="StopPodSandbox for \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\"" Feb 8 23:52:38.117257 kubelet[2113]: I0208 23:52:38.116973 2113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:38.118578 env[1139]: time="2024-02-08T23:52:38.118516635Z" level=info msg="StopPodSandbox for \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\"" Feb 8 23:52:38.124987 kubelet[2113]: I0208 23:52:38.124936 2113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" 
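Every sandbox failure in the entries above trips over the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes only once it is running with /var/lib/calico/ mounted. A minimal Go sketch of that pre-flight check, assuming nothing beyond the path and guidance quoted in the errors themselves (the helper name is ours, not Calico's):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // nodenameFile is the path the CNI plugin stats, per the errors logged above.
    const nodenameFile = "/var/lib/calico/nodename"

    // checkCalicoNodename reports whether calico/node has written its nodename
    // file yet; until it exists, CNI ADD/DEL calls fail exactly as logged above.
    func checkCalicoNodename() error {
        if _, err := os.Stat(nodenameFile); err != nil {
            if errors.Is(err, os.ErrNotExist) {
                return fmt.Errorf("%s missing: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
            }
            return fmt.Errorf("stat %s: %w", nodenameFile, err)
        }
        return nil
    }

    func main() {
        if err := checkCalicoNodename(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("calico nodename file present; CNI ADD/DEL should proceed")
    }

Once calico-node is actually started (the StartContainer entry at 23:52:47 below), the file appears and the retried DEL/ADD calls go through, as the "Teardown processing complete" and sandbox-creation entries at 23:52:50 show.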
Feb 8 23:52:38.126924 env[1139]: time="2024-02-08T23:52:38.126865546Z" level=info msg="StopPodSandbox for \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\"" Feb 8 23:52:38.200120 env[1139]: time="2024-02-08T23:52:38.200058848Z" level=error msg="StopPodSandbox for \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\" failed" error="failed to destroy network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:38.200642 kubelet[2113]: E0208 23:52:38.200476 2113 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:38.200642 kubelet[2113]: E0208 23:52:38.200516 2113 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b} Feb 8 23:52:38.200642 kubelet[2113]: E0208 23:52:38.200559 2113 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c90b1627-bdd7-4b0e-9b33-829b081056fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:52:38.200642 kubelet[2113]: E0208 23:52:38.200610 2113 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c90b1627-bdd7-4b0e-9b33-829b081056fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cp9fc" podUID=c90b1627-bdd7-4b0e-9b33-829b081056fe Feb 8 23:52:38.208093 env[1139]: time="2024-02-08T23:52:38.208039718Z" level=error msg="StopPodSandbox for \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\" failed" error="failed to destroy network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:38.208848 kubelet[2113]: E0208 23:52:38.208384 2113 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:38.208848 kubelet[2113]: E0208 23:52:38.208421 2113 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532} Feb 8 23:52:38.208848 kubelet[2113]: E0208 23:52:38.208478 2113 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3774f726-461c-4a92-8c72-de7a78a4ac63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:52:38.208848 kubelet[2113]: E0208 23:52:38.208515 2113 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3774f726-461c-4a92-8c72-de7a78a4ac63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-6c8kq" podUID=3774f726-461c-4a92-8c72-de7a78a4ac63 Feb 8 23:52:38.225136 env[1139]: time="2024-02-08T23:52:38.225087177Z" level=error msg="StopPodSandbox for \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\" failed" error="failed to destroy network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:38.225614 kubelet[2113]: E0208 23:52:38.225470 2113 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:38.225614 kubelet[2113]: E0208 23:52:38.225508 2113 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010} Feb 8 23:52:38.225614 kubelet[2113]: E0208 23:52:38.225551 2113 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dccf5af4-92a1-4f4c-ac0e-a30203c7f99d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:52:38.225614 kubelet[2113]: E0208 23:52:38.225590 2113 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dccf5af4-92a1-4f4c-ac0e-a30203c7f99d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-2d8cl" podUID=dccf5af4-92a1-4f4c-ac0e-a30203c7f99d Feb 8 23:52:47.217889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738488093.mount: Deactivated successfully. Feb 8 23:52:47.436309 env[1139]: time="2024-02-08T23:52:47.436221654Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:47.440268 env[1139]: time="2024-02-08T23:52:47.440233077Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:47.442458 env[1139]: time="2024-02-08T23:52:47.442429629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:47.445290 env[1139]: time="2024-02-08T23:52:47.445262386Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:47.445984 env[1139]: time="2024-02-08T23:52:47.445955729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 8 23:52:47.494599 env[1139]: time="2024-02-08T23:52:47.494471671Z" level=info msg="CreateContainer within sandbox \"a3064daba5afb5c965f3b77f0b2084b0576b1d7d104954a0433fadfd9e84abc7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 8 23:52:47.508633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4218229739.mount: Deactivated successfully. 
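The containerd entries above record the ghcr.io/flatcar/calico/node:v3.27.0 pull resolving to a sha256-pinned image reference. A hypothetical Go filter for extracting those resolved references from a journal stream, assuming the exact msg="PullImage \"...\" returns image reference \"...\"" escaping shown above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // pullRe matches containerd lines of the form seen above:
    //   level=info msg="PullImage \"<name>\" returns image reference \"<digest>\""
    var pullRe = regexp.MustCompile(`PullImage \\"([^"\\]+)\\" returns image reference \\"([^"\\]+)\\"`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // the log lines here are very long
        for sc.Scan() {
            if m := pullRe.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("image %s resolved to %s\n", m[1], m[2])
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

Fed this section on stdin, it should report the v3.27.0 tag resolving to sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c, matching the PullImage entry above.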
Feb 8 23:52:47.530703 env[1139]: time="2024-02-08T23:52:47.530638749Z" level=info msg="CreateContainer within sandbox \"a3064daba5afb5c965f3b77f0b2084b0576b1d7d104954a0433fadfd9e84abc7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5dc66cc8bf05b6a49fa5ff2880a81eba1ba05adffc29ddc0b695b4bc47e681b8\"" Feb 8 23:52:47.533197 env[1139]: time="2024-02-08T23:52:47.533099754Z" level=info msg="StartContainer for \"5dc66cc8bf05b6a49fa5ff2880a81eba1ba05adffc29ddc0b695b4bc47e681b8\"" Feb 8 23:52:47.940795 env[1139]: time="2024-02-08T23:52:47.940731285Z" level=info msg="StopPodSandbox for \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\"" Feb 8 23:52:47.949607 env[1139]: time="2024-02-08T23:52:47.949513645Z" level=info msg="StartContainer for \"5dc66cc8bf05b6a49fa5ff2880a81eba1ba05adffc29ddc0b695b4bc47e681b8\" returns successfully" Feb 8 23:52:48.008714 env[1139]: time="2024-02-08T23:52:48.008653559Z" level=error msg="StopPodSandbox for \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\" failed" error="failed to destroy network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:52:48.009324 kubelet[2113]: E0208 23:52:48.009107 2113 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:52:48.009324 kubelet[2113]: E0208 23:52:48.009169 2113 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd} Feb 8 23:52:48.009324 kubelet[2113]: E0208 23:52:48.009215 2113 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d775a679-abfc-4c4d-a44c-4e5893e5a899\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:52:48.009324 kubelet[2113]: E0208 23:52:48.009280 2113 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d775a679-abfc-4c4d-a44c-4e5893e5a899\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f9d567d-vn2vk" podUID=d775a679-abfc-4c4d-a44c-4e5893e5a899 Feb 8 23:52:48.135066 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 8 23:52:48.136170 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
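The kernel reports the wireguard module loading here, presumably pulled in by the freshly started calico-node probing for WireGuard support. A small sketch for confirming a module is loaded by scanning /proc/modules (plain procfs, nothing Calico-specific; built-in kernel code will not show up there):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // moduleLoaded reports whether the named kernel module appears in
    // /proc/modules, whose first whitespace-separated field is the module name.
    func moduleLoaded(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if fields := strings.Fields(sc.Text()); len(fields) > 0 && fields[0] == name {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := moduleLoaded("wireguard")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wireguard loaded:", ok)
    }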
Feb 8 23:52:48.186983 kubelet[2113]: I0208 23:52:48.186936 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-ts67l" podStartSLOduration=-9.2233720016727e+09 pod.CreationTimestamp="2024-02-08 23:52:13 +0000 UTC" firstStartedPulling="2024-02-08 23:52:14.638804962 +0000 UTC m=+21.071421900" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:52:48.170352276 +0000 UTC m=+54.602969214" watchObservedRunningTime="2024-02-08 23:52:48.182075028 +0000 UTC m=+54.614691966" Feb 8 23:52:49.199502 systemd[1]: run-containerd-runc-k8s.io-5dc66cc8bf05b6a49fa5ff2880a81eba1ba05adffc29ddc0b695b4bc47e681b8-runc.QC40bu.mount: Deactivated successfully. Feb 8 23:52:49.634321 kernel: audit: type=1400 audit(1707436369.629:287): avc: denied { write } for pid=3347 comm="tee" name="fd" dev="proc" ino=27715 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:52:49.629000 audit[3347]: AVC avc: denied { write } for pid=3347 comm="tee" name="fd" dev="proc" ino=27715 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:52:49.629000 audit[3347]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffef88e4961 a2=241 a3=1b6 items=1 ppid=3325 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:49.646318 kernel: audit: type=1300 audit(1707436369.629:287): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffef88e4961 a2=241 a3=1b6 items=1 ppid=3325 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:49.629000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 8 23:52:49.657937 kernel: audit: type=1307 audit(1707436369.629:287): cwd="/etc/service/enabled/confd/log" Feb 8 23:52:49.657994 kernel: audit: type=1302 audit(1707436369.629:287): item=0 name="/dev/fd/63" inode=26732 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:52:49.629000 audit: PATH item=0 name="/dev/fd/63" inode=26732 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:52:49.629000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:52:49.671322 kernel: audit: type=1327 audit(1707436369.629:287): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:52:49.636000 audit[3345]: AVC avc: denied { write } for pid=3345 comm="tee" name="fd" dev="proc" ino=26745 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:52:49.677327 kernel: audit: type=1400 audit(1707436369.636:288): avc: denied { write } for pid=3345 comm="tee" name="fd" dev="proc" ino=26745 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:52:49.636000 audit[3345]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c 
a1=7fffac020951 a2=241 a3=1b6 items=1 ppid=3320 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:49.683344 kernel: audit: type=1300 audit(1707436369.636:288): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffac020951 a2=241 a3=1b6 items=1 ppid=3320 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:49.636000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 8 23:52:49.636000 audit: PATH item=0 name="/dev/fd/63" inode=26728 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:52:49.690544 kernel: audit: type=1307 audit(1707436369.636:288): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 8 23:52:49.691190 kernel: audit: type=1302 audit(1707436369.636:288): item=0 name="/dev/fd/63" inode=26728 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:52:49.636000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:52:49.647000 audit[3359]: AVC avc: denied { write } for pid=3359 comm="tee" name="fd" dev="proc" ino=27719 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:52:49.647000 audit[3359]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe2f3c6962 a2=241 a3=1b6 items=1 ppid=3318 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:49.647000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 8 23:52:49.695341 kernel: audit: type=1327 audit(1707436369.636:288): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:52:49.647000 audit: PATH item=0 name="/dev/fd/63" inode=26739 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:52:49.647000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:52:49.649000 audit[3351]: AVC avc: denied { write } for pid=3351 comm="tee" name="fd" dev="proc" ino=26764 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:52:49.649000 audit[3351]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdb0cfe961 a2=241 a3=1b6 items=1 ppid=3322 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:49.649000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 8 23:52:49.649000 audit: PATH item=0 name="/dev/fd/63" inode=26733 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:52:49.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:52:49.681000 audit[3378]: AVC avc: denied { write } for pid=3378 comm="tee" name="fd" dev="proc" ino=27729 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:52:49.681000 audit[3378]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdfdd50952 a2=241 a3=1b6 items=1 ppid=3315 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:49.681000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 8 23:52:49.681000 audit: PATH item=0 name="/dev/fd/63" inode=26763 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:52:49.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:52:49.699000 audit[3372]: AVC avc: denied { write } for pid=3372 comm="tee" name="fd" dev="proc" ino=26778 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:52:49.699000 audit[3372]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffef2688961 a2=241 a3=1b6 items=1 ppid=3330 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:49.699000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 8 23:52:49.699000 audit: PATH item=0 name="/dev/fd/63" inode=27716 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:52:49.699000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:52:49.709000 audit[3387]: AVC avc: denied { write } for pid=3387 comm="tee" name="fd" dev="proc" ino=27736 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:52:49.709000 audit[3387]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed2cd8963 a2=241 a3=1b6 items=1 ppid=3312 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:49.709000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 8 23:52:49.709000 audit: PATH item=0 name="/dev/fd/63" inode=27733 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:52:49.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:52:49.939008 env[1139]: time="2024-02-08T23:52:49.938910228Z" level=info msg="StopPodSandbox for \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\"" Feb 8 
23:52:50.168000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.168000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.168000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.168000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.168000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.168000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.168000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.168000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.168000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.168000 audit: BPF prog-id=10 op=LOAD Feb 8 23:52:50.168000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc36427510 a2=70 a3=7f982437b000 items=0 ppid=3324 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.168000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:52:50.169000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit: BPF prog-id=11 op=LOAD Feb 8 23:52:50.169000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc36427510 a2=70 a3=6e items=0 ppid=3324 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:52:50.169000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc364274c0 a2=70 a3=470860 items=0 ppid=3324 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit: BPF prog-id=12 op=LOAD Feb 8 23:52:50.169000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc364274a0 a2=70 a3=7ffc36427510 items=0 ppid=3324 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:52:50.169000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:52:50.169000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.169000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc36427580 a2=70 a3=0 items=0 ppid=3324 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc36427570 a2=70 a3=0 items=0 ppid=3324 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.170000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc364275b0 a2=70 a3=fe00 items=0 ppid=3324 pid=3476 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.170000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { perfmon } for pid=3476 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit[3476]: AVC avc: denied { bpf } for pid=3476 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.170000 audit: BPF prog-id=13 op=LOAD Feb 8 23:52:50.170000 audit[3476]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc364274d0 a2=70 a3=ffffffff items=0 ppid=3324 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.170000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:52:50.179000 audit[3479]: AVC avc: denied { bpf } for pid=3479 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.179000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbd45b3d0 a2=70 a3=ffff items=0 ppid=3324 pid=3479 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.179000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 8 23:52:50.179000 audit[3479]: AVC avc: denied { bpf } for pid=3479 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:52:50.179000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbd45b2a0 a2=70 a3=3 items=0 ppid=3324 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.179000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 8 23:52:50.187000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:52:50.273000 audit[3503]: NETFILTER_CFG table=mangle:111 family=2 entries=19 op=nft_register_chain pid=3503 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:52:50.273000 audit[3503]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffc670366c0 a2=0 a3=7ffc670366ac items=0 ppid=3324 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.273000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:52:50.275000 audit[3504]: NETFILTER_CFG table=nat:112 family=2 entries=16 op=nft_register_chain pid=3504 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:52:50.275000 audit[3504]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffd371b0020 a2=0 a3=0 items=0 ppid=3324 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.275000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:52:50.282000 audit[3502]: NETFILTER_CFG table=raw:113 family=2 entries=19 op=nft_register_chain pid=3502 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:52:50.282000 audit[3502]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffc85a550d0 a2=0 a3=0 items=0 ppid=3324 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.282000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:52:50.284000 audit[3506]: NETFILTER_CFG table=filter:114 family=2 entries=39 op=nft_register_chain pid=3506 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Feb 8 23:52:50.284000 audit[3506]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7fff3ed9cab0 a2=0 a3=0 items=0 ppid=3324 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.284000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.054 [INFO][3415] k8s.go 578: Cleaning up netns ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.054 [INFO][3415] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" iface="eth0" netns="/var/run/netns/cni-f513a0c6-d661-3ec2-9404-83b9a27923dc" Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.055 [INFO][3415] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" iface="eth0" netns="/var/run/netns/cni-f513a0c6-d661-3ec2-9404-83b9a27923dc" Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.057 [INFO][3415] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" iface="eth0" netns="/var/run/netns/cni-f513a0c6-d661-3ec2-9404-83b9a27923dc" Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.057 [INFO][3415] k8s.go 585: Releasing IP address(es) ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.057 [INFO][3415] utils.go 188: Calico CNI releasing IP address ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.369 [INFO][3451] ipam_plugin.go 415: Releasing address using handleID ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" HandleID="k8s-pod-network.e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.371 [INFO][3451] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.371 [INFO][3451] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.393 [WARNING][3451] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" HandleID="k8s-pod-network.e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.393 [INFO][3451] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" HandleID="k8s-pod-network.e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.396 [INFO][3451] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:52:50.404489 env[1139]: 2024-02-08 23:52:50.399 [INFO][3415] k8s.go 591: Teardown processing complete. ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:50.412896 env[1139]: time="2024-02-08T23:52:50.407825959Z" level=info msg="TearDown network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\" successfully" Feb 8 23:52:50.412896 env[1139]: time="2024-02-08T23:52:50.407859211Z" level=info msg="StopPodSandbox for \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\" returns successfully" Feb 8 23:52:50.406878 systemd[1]: run-netns-cni\x2df513a0c6\x2dd661\x2d3ec2\x2d9404\x2d83b9a27923dc.mount: Deactivated successfully. Feb 8 23:52:50.414071 env[1139]: time="2024-02-08T23:52:50.414001284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cp9fc,Uid:c90b1627-bdd7-4b0e-9b33-829b081056fe,Namespace:calico-system,Attempt:1,}" Feb 8 23:52:50.637335 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicd8ac129c5b: link becomes ready Feb 8 23:52:50.637414 systemd-networkd[1029]: calicd8ac129c5b: Link UP Feb 8 23:52:50.637581 systemd-networkd[1029]: calicd8ac129c5b: Gained carrier Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.524 [INFO][3513] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0 csi-node-driver- calico-system c90b1627-bdd7-4b0e-9b33-829b081056fe 711 0 2024-02-08 23:52:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3510-3-2-a-bd3a159777.novalocal csi-node-driver-cp9fc eth0 default [] [] [kns.calico-system ksa.calico-system.default] calicd8ac129c5b [] []}} ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Namespace="calico-system" Pod="csi-node-driver-cp9fc" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.525 [INFO][3513] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Namespace="calico-system" Pod="csi-node-driver-cp9fc" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.578 [INFO][3526] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" HandleID="k8s-pod-network.8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.591 [INFO][3526] ipam_plugin.go 268: Auto assigning IP ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" HandleID="k8s-pod-network.8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c2a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-2-a-bd3a159777.novalocal", "pod":"csi-node-driver-cp9fc", "timestamp":"2024-02-08 23:52:50.578742018 +0000 
UTC"}, Hostname:"ci-3510-3-2-a-bd3a159777.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.592 [INFO][3526] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.592 [INFO][3526] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.592 [INFO][3526] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-a-bd3a159777.novalocal' Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.598 [INFO][3526] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.607 [INFO][3526] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.613 [INFO][3526] ipam.go 489: Trying affinity for 192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.615 [INFO][3526] ipam.go 155: Attempting to load block cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.618 [INFO][3526] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.618 [INFO][3526] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.620 [INFO][3526] ipam.go 1682: Creating new handle: k8s-pod-network.8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2 Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.624 [INFO][3526] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.630 [INFO][3526] ipam.go 1216: Successfully claimed IPs: [192.168.52.65/26] block=192.168.52.64/26 handle="k8s-pod-network.8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.630 [INFO][3526] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.65/26] handle="k8s-pod-network.8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.630 [INFO][3526] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:52:50.658485 env[1139]: 2024-02-08 23:52:50.630 [INFO][3526] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.52.65/26] IPv6=[] ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" HandleID="k8s-pod-network.8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.659111 env[1139]: 2024-02-08 23:52:50.633 [INFO][3513] k8s.go 385: Populated endpoint ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Namespace="calico-system" Pod="csi-node-driver-cp9fc" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c90b1627-bdd7-4b0e-9b33-829b081056fe", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"", Pod:"csi-node-driver-cp9fc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicd8ac129c5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:50.659111 env[1139]: 2024-02-08 23:52:50.633 [INFO][3513] k8s.go 386: Calico CNI using IPs: [192.168.52.65/32] ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Namespace="calico-system" Pod="csi-node-driver-cp9fc" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.659111 env[1139]: 2024-02-08 23:52:50.633 [INFO][3513] dataplane_linux.go 68: Setting the host side veth name to calicd8ac129c5b ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Namespace="calico-system" Pod="csi-node-driver-cp9fc" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.659111 env[1139]: 2024-02-08 23:52:50.638 [INFO][3513] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Namespace="calico-system" Pod="csi-node-driver-cp9fc" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.659111 env[1139]: 2024-02-08 23:52:50.638 [INFO][3513] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Namespace="calico-system" 
Pod="csi-node-driver-cp9fc" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c90b1627-bdd7-4b0e-9b33-829b081056fe", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2", Pod:"csi-node-driver-cp9fc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicd8ac129c5b", MAC:"7e:5a:1b:c3:5e:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:50.659111 env[1139]: 2024-02-08 23:52:50.651 [INFO][3513] k8s.go 491: Wrote updated endpoint to datastore ContainerID="8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2" Namespace="calico-system" Pod="csi-node-driver-cp9fc" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:50.672586 env[1139]: time="2024-02-08T23:52:50.672520541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:52:50.672727 env[1139]: time="2024-02-08T23:52:50.672588047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:52:50.672727 env[1139]: time="2024-02-08T23:52:50.672603686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:52:50.673193 env[1139]: time="2024-02-08T23:52:50.673136821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2 pid=3550 runtime=io.containerd.runc.v2 Feb 8 23:52:50.722000 audit[3582]: NETFILTER_CFG table=filter:115 family=2 entries=36 op=nft_register_chain pid=3582 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:52:50.722000 audit[3582]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7fff8eb397b0 a2=0 a3=7fff8eb3979c items=0 ppid=3324 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:50.722000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:52:50.734171 env[1139]: time="2024-02-08T23:52:50.734133362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cp9fc,Uid:c90b1627-bdd7-4b0e-9b33-829b081056fe,Namespace:calico-system,Attempt:1,} returns sandbox id \"8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2\"" Feb 8 23:52:50.738182 env[1139]: time="2024-02-08T23:52:50.738152413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 8 23:52:50.941371 env[1139]: time="2024-02-08T23:52:50.938076000Z" level=info msg="StopPodSandbox for \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\"" Feb 8 23:52:51.032847 systemd-networkd[1029]: vxlan.calico: Link UP Feb 8 23:52:51.032864 systemd-networkd[1029]: vxlan.calico: Gained carrier Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.069 [INFO][3602] k8s.go 578: Cleaning up netns ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.069 [INFO][3602] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" iface="eth0" netns="/var/run/netns/cni-0a5cd523-9125-e6cb-4019-c00cf4e5a85b" Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.070 [INFO][3602] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" iface="eth0" netns="/var/run/netns/cni-0a5cd523-9125-e6cb-4019-c00cf4e5a85b" Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.070 [INFO][3602] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" iface="eth0" netns="/var/run/netns/cni-0a5cd523-9125-e6cb-4019-c00cf4e5a85b" Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.070 [INFO][3602] k8s.go 585: Releasing IP address(es) ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.070 [INFO][3602] utils.go 188: Calico CNI releasing IP address ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.092 [INFO][3610] ipam_plugin.go 415: Releasing address using handleID ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" HandleID="k8s-pod-network.7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.092 [INFO][3610] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.092 [INFO][3610] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.100 [WARNING][3610] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" HandleID="k8s-pod-network.7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.100 [INFO][3610] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" HandleID="k8s-pod-network.7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.102 [INFO][3610] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:52:51.105455 env[1139]: 2024-02-08 23:52:51.104 [INFO][3602] k8s.go 591: Teardown processing complete. 
ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:51.106089 env[1139]: time="2024-02-08T23:52:51.106058665Z" level=info msg="TearDown network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\" successfully" Feb 8 23:52:51.106644 env[1139]: time="2024-02-08T23:52:51.106179190Z" level=info msg="StopPodSandbox for \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\" returns successfully" Feb 8 23:52:51.106877 env[1139]: time="2024-02-08T23:52:51.106854211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-2d8cl,Uid:dccf5af4-92a1-4f4c-ac0e-a30203c7f99d,Namespace:kube-system,Attempt:1,}" Feb 8 23:52:51.279909 systemd-networkd[1029]: cali9ae67724474: Link UP Feb 8 23:52:51.286990 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:52:51.287085 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9ae67724474: link becomes ready Feb 8 23:52:51.291088 systemd-networkd[1029]: cali9ae67724474: Gained carrier Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.163 [INFO][3616] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0 coredns-787d4945fb- kube-system dccf5af4-92a1-4f4c-ac0e-a30203c7f99d 719 0 2024-02-08 23:52:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-2-a-bd3a159777.novalocal coredns-787d4945fb-2d8cl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ae67724474 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Namespace="kube-system" Pod="coredns-787d4945fb-2d8cl" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.163 [INFO][3616] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Namespace="kube-system" Pod="coredns-787d4945fb-2d8cl" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.202 [INFO][3629] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" HandleID="k8s-pod-network.1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.229 [INFO][3629] ipam_plugin.go 268: Auto assigning IP ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" HandleID="k8s-pod-network.1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ada10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-2-a-bd3a159777.novalocal", "pod":"coredns-787d4945fb-2d8cl", "timestamp":"2024-02-08 23:52:51.202510962 +0000 UTC"}, Hostname:"ci-3510-3-2-a-bd3a159777.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.231 [INFO][3629] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.231 [INFO][3629] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.231 [INFO][3629] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-a-bd3a159777.novalocal' Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.235 [INFO][3629] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.240 [INFO][3629] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.245 [INFO][3629] ipam.go 489: Trying affinity for 192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.247 [INFO][3629] ipam.go 155: Attempting to load block cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.250 [INFO][3629] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.250 [INFO][3629] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.253 [INFO][3629] ipam.go 1682: Creating new handle: k8s-pod-network.1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.260 [INFO][3629] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.268 [INFO][3629] ipam.go 1216: Successfully claimed IPs: [192.168.52.66/26] block=192.168.52.64/26 handle="k8s-pod-network.1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.268 [INFO][3629] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.66/26] handle="k8s-pod-network.1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.268 [INFO][3629] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:52:51.315558 env[1139]: 2024-02-08 23:52:51.269 [INFO][3629] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.52.66/26] IPv6=[] ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" HandleID="k8s-pod-network.1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.316347 env[1139]: 2024-02-08 23:52:51.273 [INFO][3616] k8s.go 385: Populated endpoint ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Namespace="kube-system" Pod="coredns-787d4945fb-2d8cl" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"dccf5af4-92a1-4f4c-ac0e-a30203c7f99d", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"", Pod:"coredns-787d4945fb-2d8cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ae67724474", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:51.316347 env[1139]: 2024-02-08 23:52:51.273 [INFO][3616] k8s.go 386: Calico CNI using IPs: [192.168.52.66/32] ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Namespace="kube-system" Pod="coredns-787d4945fb-2d8cl" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.316347 env[1139]: 2024-02-08 23:52:51.273 [INFO][3616] dataplane_linux.go 68: Setting the host side veth name to cali9ae67724474 ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Namespace="kube-system" Pod="coredns-787d4945fb-2d8cl" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.316347 env[1139]: 2024-02-08 23:52:51.291 [INFO][3616] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Namespace="kube-system" Pod="coredns-787d4945fb-2d8cl" 
WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.316347 env[1139]: 2024-02-08 23:52:51.292 [INFO][3616] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Namespace="kube-system" Pod="coredns-787d4945fb-2d8cl" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"dccf5af4-92a1-4f4c-ac0e-a30203c7f99d", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e", Pod:"coredns-787d4945fb-2d8cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ae67724474", MAC:"4a:e5:e3:41:dc:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:51.316347 env[1139]: 2024-02-08 23:52:51.309 [INFO][3616] k8s.go 491: Wrote updated endpoint to datastore ContainerID="1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e" Namespace="kube-system" Pod="coredns-787d4945fb-2d8cl" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:51.331000 audit[3649]: NETFILTER_CFG table=filter:116 family=2 entries=40 op=nft_register_chain pid=3649 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:52:51.331000 audit[3649]: SYSCALL arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7ffc04a82800 a2=0 a3=7ffc04a827ec items=0 ppid=3324 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:51.331000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:52:51.337058 env[1139]: time="2024-02-08T23:52:51.336995919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:52:51.337191 env[1139]: time="2024-02-08T23:52:51.337047757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:52:51.337191 env[1139]: time="2024-02-08T23:52:51.337066642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:52:51.338097 env[1139]: time="2024-02-08T23:52:51.337438016Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e pid=3656 runtime=io.containerd.runc.v2 Feb 8 23:52:51.391588 env[1139]: time="2024-02-08T23:52:51.391554313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-2d8cl,Uid:dccf5af4-92a1-4f4c-ac0e-a30203c7f99d,Namespace:kube-system,Attempt:1,} returns sandbox id \"1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e\"" Feb 8 23:52:51.396576 env[1139]: time="2024-02-08T23:52:51.396540755Z" level=info msg="CreateContainer within sandbox \"1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:52:51.408880 systemd[1]: run-containerd-runc-k8s.io-8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2-runc.EgsU1D.mount: Deactivated successfully. Feb 8 23:52:51.409009 systemd[1]: run-netns-cni\x2d0a5cd523\x2d9125\x2de6cb\x2d4019\x2dc00cf4e5a85b.mount: Deactivated successfully. Feb 8 23:52:51.424897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986923008.mount: Deactivated successfully. Feb 8 23:52:51.432494 env[1139]: time="2024-02-08T23:52:51.432456123Z" level=info msg="CreateContainer within sandbox \"1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"484eea201745c7904c62752e457ffde4b303556ccab4494f6c404d8b676bca5b\"" Feb 8 23:52:51.433876 env[1139]: time="2024-02-08T23:52:51.433425163Z" level=info msg="StartContainer for \"484eea201745c7904c62752e457ffde4b303556ccab4494f6c404d8b676bca5b\"" Feb 8 23:52:51.500870 env[1139]: time="2024-02-08T23:52:51.500824775Z" level=info msg="StartContainer for \"484eea201745c7904c62752e457ffde4b303556ccab4494f6c404d8b676bca5b\" returns successfully" Feb 8 23:52:52.221371 kubelet[2113]: I0208 23:52:52.221256 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-2d8cl" podStartSLOduration=46.22116498 pod.CreationTimestamp="2024-02-08 23:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:52:52.195844741 +0000 UTC m=+58.628461729" watchObservedRunningTime="2024-02-08 23:52:52.22116498 +0000 UTC m=+58.653781968" Feb 8 23:52:52.369000 audit[3755]: NETFILTER_CFG table=filter:117 family=2 entries=12 op=nft_register_rule pid=3755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:52.369000 audit[3755]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffd5c5166b0 a2=0 a3=7ffd5c51669c items=0 ppid=2271 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:52.369000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:52.372217 systemd-networkd[1029]: calicd8ac129c5b: Gained IPv6LL Feb 8 23:52:52.370000 audit[3755]: NETFILTER_CFG table=nat:118 family=2 entries=30 op=nft_register_rule pid=3755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:52.370000 audit[3755]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffd5c5166b0 a2=0 a3=7ffd5c51669c items=0 ppid=2271 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:52.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:52.437000 audit[3781]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=3781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:52.437000 audit[3781]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe25272480 a2=0 a3=7ffe2527246c items=0 ppid=2271 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:52.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:52.438000 audit[3781]: NETFILTER_CFG table=nat:120 family=2 entries=51 op=nft_register_chain pid=3781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:52.438000 audit[3781]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffe25272480 a2=0 a3=7ffe2527246c items=0 ppid=2271 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:52.438000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:52.498665 systemd-networkd[1029]: cali9ae67724474: Gained IPv6LL Feb 8 23:52:52.627628 systemd-networkd[1029]: vxlan.calico: Gained IPv6LL Feb 8 23:52:52.710586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918758857.mount: Deactivated successfully. Feb 8 23:52:52.940341 env[1139]: time="2024-02-08T23:52:52.939229697Z" level=info msg="StopPodSandbox for \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\"" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.070 [INFO][3798] k8s.go 578: Cleaning up netns ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.070 [INFO][3798] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" iface="eth0" netns="/var/run/netns/cni-0d9b87ac-6881-be44-c66a-443d787d2655" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.071 [INFO][3798] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" iface="eth0" netns="/var/run/netns/cni-0d9b87ac-6881-be44-c66a-443d787d2655" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.071 [INFO][3798] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" iface="eth0" netns="/var/run/netns/cni-0d9b87ac-6881-be44-c66a-443d787d2655" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.071 [INFO][3798] k8s.go 585: Releasing IP address(es) ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.071 [INFO][3798] utils.go 188: Calico CNI releasing IP address ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.112 [INFO][3804] ipam_plugin.go 415: Releasing address using handleID ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" HandleID="k8s-pod-network.10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.112 [INFO][3804] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.112 [INFO][3804] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.121 [WARNING][3804] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" HandleID="k8s-pod-network.10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.121 [INFO][3804] ipam_plugin.go 443: Releasing address using workloadID ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" HandleID="k8s-pod-network.10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.122 [INFO][3804] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:52:53.125965 env[1139]: 2024-02-08 23:52:53.124 [INFO][3798] k8s.go 591: Teardown processing complete. ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:53.129933 env[1139]: time="2024-02-08T23:52:53.128785106Z" level=info msg="TearDown network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\" successfully" Feb 8 23:52:53.129933 env[1139]: time="2024-02-08T23:52:53.128818008Z" level=info msg="StopPodSandbox for \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\" returns successfully" Feb 8 23:52:53.129933 env[1139]: time="2024-02-08T23:52:53.129456071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6c8kq,Uid:3774f726-461c-4a92-8c72-de7a78a4ac63,Namespace:kube-system,Attempt:1,}" Feb 8 23:52:53.128503 systemd[1]: run-netns-cni\x2d0d9b87ac\x2d6881\x2dbe44\x2dc66a\x2d443d787d2655.mount: Deactivated successfully. 
Feb 8 23:52:53.288964 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:52:53.289075 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7ce5721c329: link becomes ready Feb 8 23:52:53.286745 systemd-networkd[1029]: cali7ce5721c329: Link UP Feb 8 23:52:53.289212 systemd-networkd[1029]: cali7ce5721c329: Gained carrier Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.186 [INFO][3810] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0 coredns-787d4945fb- kube-system 3774f726-461c-4a92-8c72-de7a78a4ac63 740 0 2024-02-08 23:52:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-2-a-bd3a159777.novalocal coredns-787d4945fb-6c8kq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ce5721c329 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Namespace="kube-system" Pod="coredns-787d4945fb-6c8kq" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.186 [INFO][3810] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Namespace="kube-system" Pod="coredns-787d4945fb-6c8kq" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.224 [INFO][3822] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" HandleID="k8s-pod-network.4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.237 [INFO][3822] ipam_plugin.go 268: Auto assigning IP ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" HandleID="k8s-pod-network.4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-2-a-bd3a159777.novalocal", "pod":"coredns-787d4945fb-6c8kq", "timestamp":"2024-02-08 23:52:53.224644709 +0000 UTC"}, Hostname:"ci-3510-3-2-a-bd3a159777.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.237 [INFO][3822] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.237 [INFO][3822] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.237 [INFO][3822] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-a-bd3a159777.novalocal' Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.240 [INFO][3822] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.245 [INFO][3822] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.252 [INFO][3822] ipam.go 489: Trying affinity for 192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.254 [INFO][3822] ipam.go 155: Attempting to load block cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.257 [INFO][3822] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.257 [INFO][3822] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.266 [INFO][3822] ipam.go 1682: Creating new handle: k8s-pod-network.4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81 Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.271 [INFO][3822] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.278 [INFO][3822] ipam.go 1216: Successfully claimed IPs: [192.168.52.67/26] block=192.168.52.64/26 handle="k8s-pod-network.4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.278 [INFO][3822] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.67/26] handle="k8s-pod-network.4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.279 [INFO][3822] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:52:53.314988 env[1139]: 2024-02-08 23:52:53.279 [INFO][3822] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.52.67/26] IPv6=[] ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" HandleID="k8s-pod-network.4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.318979 env[1139]: 2024-02-08 23:52:53.281 [INFO][3810] k8s.go 385: Populated endpoint ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Namespace="kube-system" Pod="coredns-787d4945fb-6c8kq" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"3774f726-461c-4a92-8c72-de7a78a4ac63", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"", Pod:"coredns-787d4945fb-6c8kq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ce5721c329", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:53.318979 env[1139]: 2024-02-08 23:52:53.281 [INFO][3810] k8s.go 386: Calico CNI using IPs: [192.168.52.67/32] ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Namespace="kube-system" Pod="coredns-787d4945fb-6c8kq" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.318979 env[1139]: 2024-02-08 23:52:53.282 [INFO][3810] dataplane_linux.go 68: Setting the host side veth name to cali7ce5721c329 ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Namespace="kube-system" Pod="coredns-787d4945fb-6c8kq" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.318979 env[1139]: 2024-02-08 23:52:53.295 [INFO][3810] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Namespace="kube-system" Pod="coredns-787d4945fb-6c8kq" 
WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.318979 env[1139]: 2024-02-08 23:52:53.296 [INFO][3810] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Namespace="kube-system" Pod="coredns-787d4945fb-6c8kq" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"3774f726-461c-4a92-8c72-de7a78a4ac63", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81", Pod:"coredns-787d4945fb-6c8kq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ce5721c329", MAC:"42:f4:13:11:11:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:53.318979 env[1139]: 2024-02-08 23:52:53.306 [INFO][3810] k8s.go 491: Wrote updated endpoint to datastore ContainerID="4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81" Namespace="kube-system" Pod="coredns-787d4945fb-6c8kq" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.334000 audit[3837]: NETFILTER_CFG table=filter:121 family=2 entries=34 op=nft_register_chain pid=3837 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:52:53.334000 audit[3837]: SYSCALL arch=c000003e syscall=46 success=yes exit=17900 a0=3 a1=7fffe5813050 a2=0 a3=7fffe581303c items=0 ppid=3324 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:53.334000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:52:53.423367 env[1139]: time="2024-02-08T23:52:53.423207018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:52:53.423617 env[1139]: time="2024-02-08T23:52:53.423374250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:52:53.423617 env[1139]: time="2024-02-08T23:52:53.423412491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:52:53.423831 env[1139]: time="2024-02-08T23:52:53.423682898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81 pid=3850 runtime=io.containerd.runc.v2 Feb 8 23:52:53.524291 env[1139]: time="2024-02-08T23:52:53.524233409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6c8kq,Uid:3774f726-461c-4a92-8c72-de7a78a4ac63,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81\"" Feb 8 23:52:53.527283 env[1139]: time="2024-02-08T23:52:53.527254888Z" level=info msg="CreateContainer within sandbox \"4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:52:53.548602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2247770904.mount: Deactivated successfully. Feb 8 23:52:53.561987 env[1139]: time="2024-02-08T23:52:53.561868165Z" level=info msg="CreateContainer within sandbox \"4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03743f40bacb0c905c13f741ec0bfcb71adcd587e13203ad15f048a98bbee20c\"" Feb 8 23:52:53.568761 env[1139]: time="2024-02-08T23:52:53.568690910Z" level=info msg="StartContainer for \"03743f40bacb0c905c13f741ec0bfcb71adcd587e13203ad15f048a98bbee20c\"" Feb 8 23:52:53.667310 env[1139]: time="2024-02-08T23:52:53.667240512Z" level=info msg="StartContainer for \"03743f40bacb0c905c13f741ec0bfcb71adcd587e13203ad15f048a98bbee20c\" returns successfully" Feb 8 23:52:53.777375 env[1139]: time="2024-02-08T23:52:53.777268922Z" level=info msg="StopPodSandbox for \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\"" Feb 8 23:52:53.782445 env[1139]: time="2024-02-08T23:52:53.782371321Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:53.791957 env[1139]: time="2024-02-08T23:52:53.791895505Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:53.808075 env[1139]: time="2024-02-08T23:52:53.806339287Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:53.815950 env[1139]: time="2024-02-08T23:52:53.815887937Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:53.818567 env[1139]: time="2024-02-08T23:52:53.818480935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference 
\"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 8 23:52:53.823487 env[1139]: time="2024-02-08T23:52:53.823425388Z" level=info msg="CreateContainer within sandbox \"8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 8 23:52:53.903753 env[1139]: time="2024-02-08T23:52:53.903687442Z" level=info msg="CreateContainer within sandbox \"8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8549a8153abedf65b0776c1b4ad4456e572dc8eb3b575177de82074a976a06e9\"" Feb 8 23:52:53.908143 env[1139]: time="2024-02-08T23:52:53.908102546Z" level=info msg="StartContainer for \"8549a8153abedf65b0776c1b4ad4456e572dc8eb3b575177de82074a976a06e9\"" Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.858 [WARNING][3937] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"3774f726-461c-4a92-8c72-de7a78a4ac63", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81", Pod:"coredns-787d4945fb-6c8kq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ce5721c329", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.859 [INFO][3937] k8s.go 578: Cleaning up netns ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.859 [INFO][3937] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" iface="eth0" netns="" Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.859 [INFO][3937] k8s.go 585: Releasing IP address(es) ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.859 [INFO][3937] utils.go 188: Calico CNI releasing IP address ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.886 [INFO][3943] ipam_plugin.go 415: Releasing address using handleID ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" HandleID="k8s-pod-network.10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.886 [INFO][3943] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.886 [INFO][3943] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.896 [WARNING][3943] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" HandleID="k8s-pod-network.10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.896 [INFO][3943] ipam_plugin.go 443: Releasing address using workloadID ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" HandleID="k8s-pod-network.10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.897 [INFO][3943] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:52:53.911245 env[1139]: 2024-02-08 23:52:53.899 [INFO][3937] k8s.go 591: Teardown processing complete. ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:53.911822 env[1139]: time="2024-02-08T23:52:53.911248878Z" level=info msg="TearDown network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\" successfully" Feb 8 23:52:53.911822 env[1139]: time="2024-02-08T23:52:53.911329849Z" level=info msg="StopPodSandbox for \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\" returns successfully" Feb 8 23:52:53.911944 env[1139]: time="2024-02-08T23:52:53.911913320Z" level=info msg="RemovePodSandbox for \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\"" Feb 8 23:52:53.912006 env[1139]: time="2024-02-08T23:52:53.911946623Z" level=info msg="Forcibly stopping sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\"" Feb 8 23:52:54.000867 env[1139]: time="2024-02-08T23:52:54.000827118Z" level=info msg="StartContainer for \"8549a8153abedf65b0776c1b4ad4456e572dc8eb3b575177de82074a976a06e9\" returns successfully" Feb 8 23:52:54.004401 env[1139]: time="2024-02-08T23:52:54.004369654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:53.979 [WARNING][3975] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"3774f726-461c-4a92-8c72-de7a78a4ac63", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"4a01e5443e90404a20517c53bec2e6e213f6f65d598efd8093919891c394ae81", Pod:"coredns-787d4945fb-6c8kq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ce5721c329", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:53.980 [INFO][3975] k8s.go 578: Cleaning up netns ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:53.980 [INFO][3975] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" iface="eth0" netns="" Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:53.980 [INFO][3975] k8s.go 585: Releasing IP address(es) ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:53.980 [INFO][3975] utils.go 188: Calico CNI releasing IP address ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:54.022 [INFO][3998] ipam_plugin.go 415: Releasing address using handleID ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" HandleID="k8s-pod-network.10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:54.022 [INFO][3998] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:54.022 [INFO][3998] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:54.030 [WARNING][3998] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" HandleID="k8s-pod-network.10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:54.030 [INFO][3998] ipam_plugin.go 443: Releasing address using workloadID ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" HandleID="k8s-pod-network.10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--6c8kq-eth0" Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:54.032 [INFO][3998] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:52:54.036164 env[1139]: 2024-02-08 23:52:54.034 [INFO][3975] k8s.go 591: Teardown processing complete. ContainerID="10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532" Feb 8 23:52:54.036718 env[1139]: time="2024-02-08T23:52:54.036216730Z" level=info msg="TearDown network for sandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\" successfully" Feb 8 23:52:54.043045 env[1139]: time="2024-02-08T23:52:54.042984178Z" level=info msg="RemovePodSandbox \"10af7c5060da8d5004dedbba8b7ca3acdd6c8418b50aa60cf4d1840d4ae79532\" returns successfully" Feb 8 23:52:54.043611 env[1139]: time="2024-02-08T23:52:54.043587597Z" level=info msg="StopPodSandbox for \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\"" Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.080 [WARNING][4028] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c90b1627-bdd7-4b0e-9b33-829b081056fe", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2", Pod:"csi-node-driver-cp9fc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicd8ac129c5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.081 [INFO][4028] k8s.go 578: Cleaning up netns ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:54.126433 env[1139]: 2024-02-08 
23:52:54.081 [INFO][4028] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" iface="eth0" netns="" Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.081 [INFO][4028] k8s.go 585: Releasing IP address(es) ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.081 [INFO][4028] utils.go 188: Calico CNI releasing IP address ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.102 [INFO][4035] ipam_plugin.go 415: Releasing address using handleID ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" HandleID="k8s-pod-network.e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.102 [INFO][4035] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.102 [INFO][4035] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.113 [WARNING][4035] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" HandleID="k8s-pod-network.e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.113 [INFO][4035] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" HandleID="k8s-pod-network.e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.115 [INFO][4035] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:52:54.126433 env[1139]: 2024-02-08 23:52:54.121 [INFO][4028] k8s.go 591: Teardown processing complete. 
ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:54.126433 env[1139]: time="2024-02-08T23:52:54.124465580Z" level=info msg="TearDown network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\" successfully" Feb 8 23:52:54.126433 env[1139]: time="2024-02-08T23:52:54.124495807Z" level=info msg="StopPodSandbox for \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\" returns successfully" Feb 8 23:52:54.128694 env[1139]: time="2024-02-08T23:52:54.127734816Z" level=info msg="RemovePodSandbox for \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\"" Feb 8 23:52:54.128694 env[1139]: time="2024-02-08T23:52:54.128391183Z" level=info msg="Forcibly stopping sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\"" Feb 8 23:52:54.209166 kubelet[2113]: I0208 23:52:54.208454 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-6c8kq" podStartSLOduration=48.208398949 pod.CreationTimestamp="2024-02-08 23:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:52:54.19085238 +0000 UTC m=+60.623469328" watchObservedRunningTime="2024-02-08 23:52:54.208398949 +0000 UTC m=+60.641015897" Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.177 [WARNING][4055] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c90b1627-bdd7-4b0e-9b33-829b081056fe", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2", Pod:"csi-node-driver-cp9fc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calicd8ac129c5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.177 [INFO][4055] k8s.go 578: Cleaning up netns ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.177 [INFO][4055] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" iface="eth0" netns="" Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.177 [INFO][4055] k8s.go 585: Releasing IP address(es) ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.177 [INFO][4055] utils.go 188: Calico CNI releasing IP address ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.233 [INFO][4061] ipam_plugin.go 415: Releasing address using handleID ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" HandleID="k8s-pod-network.e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.234 [INFO][4061] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.234 [INFO][4061] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.242 [WARNING][4061] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" HandleID="k8s-pod-network.e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.243 [INFO][4061] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" HandleID="k8s-pod-network.e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-csi--node--driver--cp9fc-eth0" Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.244 [INFO][4061] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:52:54.247860 env[1139]: 2024-02-08 23:52:54.246 [INFO][4055] k8s.go 591: Teardown processing complete. 
ContainerID="e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b" Feb 8 23:52:54.249609 env[1139]: time="2024-02-08T23:52:54.247885444Z" level=info msg="TearDown network for sandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\" successfully" Feb 8 23:52:54.251253 env[1139]: time="2024-02-08T23:52:54.251205324Z" level=info msg="RemovePodSandbox \"e3179727531b9c5a8e13a640ece75abc35698942e9994a54f62b37ee905b469b\" returns successfully" Feb 8 23:52:54.251812 env[1139]: time="2024-02-08T23:52:54.251779868Z" level=info msg="StopPodSandbox for \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\"" Feb 8 23:52:54.270000 audit[4108]: NETFILTER_CFG table=filter:122 family=2 entries=6 op=nft_register_rule pid=4108 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:54.270000 audit[4108]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcea2b53f0 a2=0 a3=7ffcea2b53dc items=0 ppid=2271 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:54.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:54.272000 audit[4108]: NETFILTER_CFG table=nat:123 family=2 entries=60 op=nft_register_rule pid=4108 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:54.272000 audit[4108]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffcea2b53f0 a2=0 a3=7ffcea2b53dc items=0 ppid=2271 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:54.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:54.325000 audit[4142]: NETFILTER_CFG table=filter:124 family=2 entries=6 op=nft_register_rule pid=4142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.294 [WARNING][4097] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"dccf5af4-92a1-4f4c-ac0e-a30203c7f99d", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e", Pod:"coredns-787d4945fb-2d8cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ae67724474", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.294 [INFO][4097] k8s.go 578: Cleaning up netns ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.295 [INFO][4097] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" iface="eth0" netns="" Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.295 [INFO][4097] k8s.go 585: Releasing IP address(es) ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.295 [INFO][4097] utils.go 188: Calico CNI releasing IP address ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.316 [INFO][4123] ipam_plugin.go 415: Releasing address using handleID ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" HandleID="k8s-pod-network.7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.317 [INFO][4123] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.317 [INFO][4123] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.325 [WARNING][4123] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" HandleID="k8s-pod-network.7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.325 [INFO][4123] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" HandleID="k8s-pod-network.7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.327 [INFO][4123] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:52:54.330124 env[1139]: 2024-02-08 23:52:54.328 [INFO][4097] k8s.go 591: Teardown processing complete. ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:54.330641 env[1139]: time="2024-02-08T23:52:54.330611715Z" level=info msg="TearDown network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\" successfully" Feb 8 23:52:54.330713 env[1139]: time="2024-02-08T23:52:54.330696032Z" level=info msg="StopPodSandbox for \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\" returns successfully" Feb 8 23:52:54.325000 audit[4142]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffec30e4cf0 a2=0 a3=7ffec30e4cdc items=0 ppid=2271 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:54.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:54.331454 env[1139]: time="2024-02-08T23:52:54.331412874Z" level=info msg="RemovePodSandbox for \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\"" Feb 8 23:52:54.331511 env[1139]: time="2024-02-08T23:52:54.331458779Z" level=info msg="Forcibly stopping sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\"" Feb 8 23:52:54.345000 audit[4142]: NETFILTER_CFG table=nat:125 family=2 entries=72 op=nft_register_chain pid=4142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:52:54.345000 audit[4142]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffec30e4cf0 a2=0 a3=7ffec30e4cdc items=0 ppid=2271 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:52:54.345000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.373 [WARNING][4156] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"dccf5af4-92a1-4f4c-ac0e-a30203c7f99d", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"1015f03f5408033113adb220dca02262fd34d53d6c237978babcd95696e1884e", Pod:"coredns-787d4945fb-2d8cl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ae67724474", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.373 [INFO][4156] k8s.go 578: Cleaning up netns ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.373 [INFO][4156] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" iface="eth0" netns="" Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.373 [INFO][4156] k8s.go 585: Releasing IP address(es) ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.373 [INFO][4156] utils.go 188: Calico CNI releasing IP address ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.400 [INFO][4163] ipam_plugin.go 415: Releasing address using handleID ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" HandleID="k8s-pod-network.7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.400 [INFO][4163] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.400 [INFO][4163] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.417 [WARNING][4163] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" HandleID="k8s-pod-network.7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.418 [INFO][4163] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" HandleID="k8s-pod-network.7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-coredns--787d4945fb--2d8cl-eth0" Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.420 [INFO][4163] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:52:54.424218 env[1139]: 2024-02-08 23:52:54.422 [INFO][4156] k8s.go 591: Teardown processing complete. ContainerID="7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010" Feb 8 23:52:54.424218 env[1139]: time="2024-02-08T23:52:54.424169855Z" level=info msg="TearDown network for sandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\" successfully" Feb 8 23:52:54.433636 env[1139]: time="2024-02-08T23:52:54.433601857Z" level=info msg="RemovePodSandbox \"7c4e073acd52cd9dffd70616815fe4e0b3706cbb5b082188e3f7cd49cdacf010\" returns successfully" Feb 8 23:52:55.058602 systemd-networkd[1029]: cali7ce5721c329: Gained IPv6LL Feb 8 23:52:56.560734 env[1139]: time="2024-02-08T23:52:56.560648614Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:56.563255 env[1139]: time="2024-02-08T23:52:56.563199591Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:56.565607 env[1139]: time="2024-02-08T23:52:56.565561783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:56.568043 env[1139]: time="2024-02-08T23:52:56.567947219Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:52:56.568702 env[1139]: time="2024-02-08T23:52:56.568655896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 8 23:52:56.573361 env[1139]: time="2024-02-08T23:52:56.572145219Z" level=info msg="CreateContainer within sandbox \"8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 8 23:52:56.587066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339581434.mount: Deactivated successfully. 
Feb 8 23:52:56.612217 env[1139]: time="2024-02-08T23:52:56.607047714Z" level=info msg="CreateContainer within sandbox \"8f8b90e64cacc6910d15f8ee58129bd9b573978a878c1d7546c7da5b380c01d2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fe9a8f74d89db5d965b92cd731ce514216db7ef37b09b33f51b44584b7d15d2e\"" Feb 8 23:52:56.612217 env[1139]: time="2024-02-08T23:52:56.607903567Z" level=info msg="StartContainer for \"fe9a8f74d89db5d965b92cd731ce514216db7ef37b09b33f51b44584b7d15d2e\"" Feb 8 23:52:56.660757 systemd[1]: run-containerd-runc-k8s.io-fe9a8f74d89db5d965b92cd731ce514216db7ef37b09b33f51b44584b7d15d2e-runc.B0GJEA.mount: Deactivated successfully. Feb 8 23:52:56.721196 env[1139]: time="2024-02-08T23:52:56.720992525Z" level=info msg="StartContainer for \"fe9a8f74d89db5d965b92cd731ce514216db7ef37b09b33f51b44584b7d15d2e\" returns successfully" Feb 8 23:52:56.991857 kubelet[2113]: I0208 23:52:56.991638 2113 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 8 23:52:56.992655 kubelet[2113]: I0208 23:52:56.992439 2113 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 8 23:53:01.941266 env[1139]: time="2024-02-08T23:53:01.939598540Z" level=info msg="StopPodSandbox for \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\"" Feb 8 23:53:02.050340 kubelet[2113]: I0208 23:53:02.048786 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-cp9fc" podStartSLOduration=-9.223371987806028e+09 pod.CreationTimestamp="2024-02-08 23:52:13 +0000 UTC" firstStartedPulling="2024-02-08 23:52:50.73540169 +0000 UTC m=+57.168018628" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:52:57.246335554 +0000 UTC m=+63.678952542" watchObservedRunningTime="2024-02-08 23:53:02.048747106 +0000 UTC m=+68.481364044" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.051 [INFO][4234] k8s.go 578: Cleaning up netns ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.051 [INFO][4234] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" iface="eth0" netns="/var/run/netns/cni-7e6dacec-fd04-7ea0-b43b-cecbaf7c9d76" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.051 [INFO][4234] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" iface="eth0" netns="/var/run/netns/cni-7e6dacec-fd04-7ea0-b43b-cecbaf7c9d76" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.052 [INFO][4234] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" iface="eth0" netns="/var/run/netns/cni-7e6dacec-fd04-7ea0-b43b-cecbaf7c9d76" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.052 [INFO][4234] k8s.go 585: Releasing IP address(es) ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.052 [INFO][4234] utils.go 188: Calico CNI releasing IP address ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.076 [INFO][4242] ipam_plugin.go 415: Releasing address using handleID ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" HandleID="k8s-pod-network.a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.076 [INFO][4242] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.076 [INFO][4242] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.085 [WARNING][4242] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" HandleID="k8s-pod-network.a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.085 [INFO][4242] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" HandleID="k8s-pod-network.a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.088 [INFO][4242] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:53:02.094234 env[1139]: 2024-02-08 23:53:02.090 [INFO][4234] k8s.go 591: Teardown processing complete. ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:02.095518 env[1139]: time="2024-02-08T23:53:02.095478786Z" level=info msg="TearDown network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\" successfully" Feb 8 23:53:02.095604 env[1139]: time="2024-02-08T23:53:02.095585296Z" level=info msg="StopPodSandbox for \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\" returns successfully" Feb 8 23:53:02.100621 env[1139]: time="2024-02-08T23:53:02.100588678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f9d567d-vn2vk,Uid:d775a679-abfc-4c4d-a44c-4e5893e5a899,Namespace:calico-system,Attempt:1,}" Feb 8 23:53:02.101165 systemd[1]: run-netns-cni\x2d7e6dacec\x2dfd04\x2d7ea0\x2db43b\x2dcecbaf7c9d76.mount: Deactivated successfully. 
Feb 8 23:53:02.253085 systemd-networkd[1029]: cali769ddc0980e: Link UP Feb 8 23:53:02.254862 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:53:02.254924 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali769ddc0980e: link becomes ready Feb 8 23:53:02.256233 systemd-networkd[1029]: cali769ddc0980e: Gained carrier Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.166 [INFO][4249] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0 calico-kube-controllers-78f9d567d- calico-system d775a679-abfc-4c4d-a44c-4e5893e5a899 787 0 2024-02-08 23:52:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78f9d567d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510-3-2-a-bd3a159777.novalocal calico-kube-controllers-78f9d567d-vn2vk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali769ddc0980e [] []}} ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Namespace="calico-system" Pod="calico-kube-controllers-78f9d567d-vn2vk" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.166 [INFO][4249] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Namespace="calico-system" Pod="calico-kube-controllers-78f9d567d-vn2vk" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.201 [INFO][4260] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" HandleID="k8s-pod-network.30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.215 [INFO][4260] ipam_plugin.go 268: Auto assigning IP ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" HandleID="k8s-pod-network.30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c2a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-2-a-bd3a159777.novalocal", "pod":"calico-kube-controllers-78f9d567d-vn2vk", "timestamp":"2024-02-08 23:53:02.20183125 +0000 UTC"}, Hostname:"ci-3510-3-2-a-bd3a159777.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.215 [INFO][4260] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.215 [INFO][4260] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.215 [INFO][4260] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-a-bd3a159777.novalocal' Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.217 [INFO][4260] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.222 [INFO][4260] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.227 [INFO][4260] ipam.go 489: Trying affinity for 192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.230 [INFO][4260] ipam.go 155: Attempting to load block cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.233 [INFO][4260] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.233 [INFO][4260] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.236 [INFO][4260] ipam.go 1682: Creating new handle: k8s-pod-network.30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8 Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.240 [INFO][4260] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.248 [INFO][4260] ipam.go 1216: Successfully claimed IPs: [192.168.52.68/26] block=192.168.52.64/26 handle="k8s-pod-network.30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.249 [INFO][4260] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.68/26] handle="k8s-pod-network.30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.249 [INFO][4260] ipam_plugin.go 377: Released host-wide IPAM lock. 
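The IPAM trace above is the heart of this CNI ADD: the node already holds an affinity for the block 192.168.52.64/26 (addresses .64 through .127), the block is loaded under the host-wide IPAM lock, and the next free address, 192.168.52.68, is claimed for calico-kube-controllers-78f9d567d-vn2vk; .65, .66 and .67 are already bound to the csi-node-driver and the two coredns pods seen earlier. A minimal sketch of that "next free address in the affine block" step (pure address arithmetic; the used set is read off this log, and Calico's real allocator also tracks handles and reservations, which are omitted here):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.52.64/26") // covers .64 through .127
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.52.64"): true, // block address itself, never handed out in this log
		netip.MustParseAddr("192.168.52.65"): true, // csi-node-driver-cp9fc
		netip.MustParseAddr("192.168.52.66"): true, // coredns-787d4945fb-2d8cl
		netip.MustParseAddr("192.168.52.67"): true, // coredns-787d4945fb-6c8kq
	}
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			fmt.Println("next free address:", a) // 192.168.52.68, as claimed in the log
			return
		}
	}
}
```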
Feb 8 23:53:02.282136 env[1139]: 2024-02-08 23:53:02.249 [INFO][4260] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.52.68/26] IPv6=[] ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" HandleID="k8s-pod-network.30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.283924 env[1139]: 2024-02-08 23:53:02.251 [INFO][4249] k8s.go 385: Populated endpoint ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Namespace="calico-system" Pod="calico-kube-controllers-78f9d567d-vn2vk" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0", GenerateName:"calico-kube-controllers-78f9d567d-", Namespace:"calico-system", SelfLink:"", UID:"d775a679-abfc-4c4d-a44c-4e5893e5a899", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f9d567d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"", Pod:"calico-kube-controllers-78f9d567d-vn2vk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali769ddc0980e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:53:02.283924 env[1139]: 2024-02-08 23:53:02.251 [INFO][4249] k8s.go 386: Calico CNI using IPs: [192.168.52.68/32] ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Namespace="calico-system" Pod="calico-kube-controllers-78f9d567d-vn2vk" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.283924 env[1139]: 2024-02-08 23:53:02.251 [INFO][4249] dataplane_linux.go 68: Setting the host side veth name to cali769ddc0980e ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Namespace="calico-system" Pod="calico-kube-controllers-78f9d567d-vn2vk" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.283924 env[1139]: 2024-02-08 23:53:02.256 [INFO][4249] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Namespace="calico-system" Pod="calico-kube-controllers-78f9d567d-vn2vk" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.283924 env[1139]: 2024-02-08 
23:53:02.256 [INFO][4249] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Namespace="calico-system" Pod="calico-kube-controllers-78f9d567d-vn2vk" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0", GenerateName:"calico-kube-controllers-78f9d567d-", Namespace:"calico-system", SelfLink:"", UID:"d775a679-abfc-4c4d-a44c-4e5893e5a899", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f9d567d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8", Pod:"calico-kube-controllers-78f9d567d-vn2vk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali769ddc0980e", MAC:"92:a7:2b:c2:38:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:53:02.283924 env[1139]: 2024-02-08 23:53:02.279 [INFO][4249] k8s.go 491: Wrote updated endpoint to datastore ContainerID="30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8" Namespace="calico-system" Pod="calico-kube-controllers-78f9d567d-vn2vk" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:02.294895 env[1139]: time="2024-02-08T23:53:02.294832151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:53:02.295110 env[1139]: time="2024-02-08T23:53:02.295081790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:53:02.295204 env[1139]: time="2024-02-08T23:53:02.295179954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:53:02.295482 env[1139]: time="2024-02-08T23:53:02.295448859Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8 pid=4290 runtime=io.containerd.runc.v2 Feb 8 23:53:02.318286 kernel: kauditd_printk_skb: 141 callbacks suppressed Feb 8 23:53:02.326206 kernel: audit: type=1325 audit(1707436382.305:323): table=filter:126 family=2 entries=48 op=nft_register_chain pid=4296 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:53:02.326288 kernel: audit: type=1300 audit(1707436382.305:323): arch=c000003e syscall=46 success=yes exit=23548 a0=3 a1=7fff3b52f770 a2=0 a3=7fff3b52f75c items=0 ppid=3324 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:02.326407 kernel: audit: type=1327 audit(1707436382.305:323): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:53:02.305000 audit[4296]: NETFILTER_CFG table=filter:126 family=2 entries=48 op=nft_register_chain pid=4296 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:53:02.305000 audit[4296]: SYSCALL arch=c000003e syscall=46 success=yes exit=23548 a0=3 a1=7fff3b52f770 a2=0 a3=7fff3b52f75c items=0 ppid=3324 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:02.305000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:53:02.373965 env[1139]: time="2024-02-08T23:53:02.373889971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f9d567d-vn2vk,Uid:d775a679-abfc-4c4d-a44c-4e5893e5a899,Namespace:calico-system,Attempt:1,} returns sandbox id \"30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8\"" Feb 8 23:53:02.378864 env[1139]: time="2024-02-08T23:53:02.377875441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 8 23:53:03.102390 systemd[1]: run-containerd-runc-k8s.io-30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8-runc.ixQrmA.mount: Deactivated successfully. Feb 8 23:53:03.570691 systemd-networkd[1029]: cali769ddc0980e: Gained IPv6LL Feb 8 23:53:04.383012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89148002.mount: Deactivated successfully. 
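The audit records in this section print the triggering command line as a hex-encoded, NUL-separated `proctitle` string. Decoding the value in the `type=1327` record above yields `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`, and the earlier iptables-restor records decode to `iptables-restore -w 5 -W 100000 --noflush --counters`. A minimal sketch of the decoding:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// proctitle value copied from the audit record above.
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The kernel stores the process title as argv strings separated by NUL bytes.
	argv := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(argv, " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```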
Feb 8 23:53:07.046216 env[1139]: time="2024-02-08T23:53:07.046177442Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:53:07.050936 env[1139]: time="2024-02-08T23:53:07.050865978Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:53:07.056126 env[1139]: time="2024-02-08T23:53:07.056102946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:53:07.061039 env[1139]: time="2024-02-08T23:53:07.060974227Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:53:07.062843 env[1139]: time="2024-02-08T23:53:07.062815079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 8 23:53:07.100555 env[1139]: time="2024-02-08T23:53:07.100509453Z" level=info msg="CreateContainer within sandbox \"30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 8 23:53:07.122547 env[1139]: time="2024-02-08T23:53:07.122506799Z" level=info msg="CreateContainer within sandbox \"30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f706f26dbf540a674496c77c24f51df5b923036da2f6384b022f61e150525f5a\"" Feb 8 23:53:07.123388 env[1139]: time="2024-02-08T23:53:07.123367367Z" level=info msg="StartContainer for \"f706f26dbf540a674496c77c24f51df5b923036da2f6384b022f61e150525f5a\"" Feb 8 23:53:07.209206 env[1139]: time="2024-02-08T23:53:07.209150742Z" level=info msg="StartContainer for \"f706f26dbf540a674496c77c24f51df5b923036da2f6384b022f61e150525f5a\" returns successfully" Feb 8 23:53:07.371516 kubelet[2113]: I0208 23:53:07.371360 2113 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78f9d567d-vn2vk" podStartSLOduration=-9.223371982483452e+09 pod.CreationTimestamp="2024-02-08 23:52:13 +0000 UTC" firstStartedPulling="2024-02-08 23:53:02.375713243 +0000 UTC m=+68.808330231" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:53:07.267409758 +0000 UTC m=+73.700026706" watchObservedRunningTime="2024-02-08 23:53:07.371323538 +0000 UTC m=+73.803940476" Feb 8 23:53:10.007577 systemd[1]: Started sshd@7-172.24.4.64:22-172.24.4.1:50760.service. Feb 8 23:53:10.021857 kernel: audit: type=1130 audit(1707436390.008:324): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.64:22-172.24.4.1:50760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:10.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.64:22-172.24.4.1:50760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:53:11.547000 audit[4385]: USER_ACCT pid=4385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:11.560710 sshd[4385]: Accepted publickey for core from 172.24.4.1 port 50760 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:53:11.572870 kernel: audit: type=1101 audit(1707436391.547:325): pid=4385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:11.572989 kernel: audit: type=1103 audit(1707436391.559:326): pid=4385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:11.559000 audit[4385]: CRED_ACQ pid=4385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:11.581109 kernel: audit: type=1006 audit(1707436391.559:327): pid=4385 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Feb 8 23:53:11.581239 kernel: audit: type=1300 audit(1707436391.559:327): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff44f4f440 a2=3 a3=0 items=0 ppid=1 pid=4385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:11.559000 audit[4385]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff44f4f440 a2=3 a3=0 items=0 ppid=1 pid=4385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:11.564193 sshd[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:53:11.559000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:11.595826 systemd[1]: Started session-8.scope. Feb 8 23:53:11.597353 kernel: audit: type=1327 audit(1707436391.559:327): proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:11.597579 systemd-logind[1126]: New session 8 of user core. 
Feb 8 23:53:11.613000 audit[4385]: USER_START pid=4385 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:11.613000 audit[4395]: CRED_ACQ pid=4395 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:11.633273 kernel: audit: type=1105 audit(1707436391.613:328): pid=4385 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:11.633484 kernel: audit: type=1103 audit(1707436391.613:329): pid=4395 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:12.832585 sshd[4385]: pam_unix(sshd:session): session closed for user core Feb 8 23:53:12.835000 audit[4385]: USER_END pid=4385 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:12.850467 kernel: audit: type=1106 audit(1707436392.835:330): pid=4385 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:12.836000 audit[4385]: CRED_DISP pid=4385 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:12.861335 kernel: audit: type=1104 audit(1707436392.836:331): pid=4385 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:12.873095 systemd-logind[1126]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:53:12.874853 systemd[1]: sshd@7-172.24.4.64:22-172.24.4.1:50760.service: Deactivated successfully. Feb 8 23:53:12.877195 systemd[1]: session-8.scope: Deactivated successfully. Feb 8 23:53:12.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.64:22-172.24.4.1:50760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:12.879777 systemd-logind[1126]: Removed session 8. Feb 8 23:53:17.841731 systemd[1]: Started sshd@8-172.24.4.64:22-172.24.4.1:57976.service. Feb 8 23:53:17.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.64:22-172.24.4.1:57976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:53:17.846109 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:53:17.846228 kernel: audit: type=1130 audit(1707436397.843:333): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.64:22-172.24.4.1:57976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:17.934832 systemd[1]: run-containerd-runc-k8s.io-5dc66cc8bf05b6a49fa5ff2880a81eba1ba05adffc29ddc0b695b4bc47e681b8-runc.rVqxrC.mount: Deactivated successfully. Feb 8 23:53:19.280000 audit[4408]: USER_ACCT pid=4408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:19.282251 sshd[4408]: Accepted publickey for core from 172.24.4.1 port 57976 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:53:19.292490 kernel: audit: type=1101 audit(1707436399.280:334): pid=4408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:19.314000 audit[4408]: CRED_ACQ pid=4408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:19.332652 kernel: audit: type=1103 audit(1707436399.314:335): pid=4408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:19.332836 kernel: audit: type=1006 audit(1707436399.314:336): pid=4408 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 8 23:53:19.332901 kernel: audit: type=1300 audit(1707436399.314:336): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf9012300 a2=3 a3=0 items=0 ppid=1 pid=4408 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:19.314000 audit[4408]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf9012300 a2=3 a3=0 items=0 ppid=1 pid=4408 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:19.314000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:19.349371 kernel: audit: type=1327 audit(1707436399.314:336): proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:19.378852 sshd[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:53:19.394539 systemd-logind[1126]: New session 9 of user core. Feb 8 23:53:19.396265 systemd[1]: Started session-9.scope. 
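Each accepted connection in this stream starts its PAM records with auid=4294967295 and ends up at auid=500 once the session is established (see the type=1006 record above: old-auid=4294967295, auid=500, ses=9). 4294967295 is 0xFFFFFFFF, i.e. (uint32)-1, the sentinel for an audit login UID that has not been set yet; pam_loginuid (listed in the USER_START grantors) assigns the real one, 500 being the core user. A tiny check of that arithmetic, purely illustrative:

    import struct

    UNSET_AUID = 0xFFFFFFFF            # printed as 4294967295 in the raw records
    assert UNSET_AUID == 4294967295
    # Reinterpreted as a signed 32-bit value it is -1, which audit tooling
    # usually renders as "unset".
    assert struct.unpack("<i", struct.pack("<I", UNSET_AUID))[0] == -1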
Feb 8 23:53:19.418000 audit[4408]: USER_START pid=4408 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:19.432685 kernel: audit: type=1105 audit(1707436399.418:337): pid=4408 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:19.431000 audit[4432]: CRED_ACQ pid=4432 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:19.439544 kernel: audit: type=1103 audit(1707436399.431:338): pid=4432 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:20.043896 sshd[4408]: pam_unix(sshd:session): session closed for user core Feb 8 23:53:20.045000 audit[4408]: USER_END pid=4408 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:20.050100 systemd[1]: sshd@8-172.24.4.64:22-172.24.4.1:57976.service: Deactivated successfully. Feb 8 23:53:20.051951 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:53:20.059374 kernel: audit: type=1106 audit(1707436400.045:339): pid=4408 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:20.046000 audit[4408]: CRED_DISP pid=4408 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:20.069704 systemd-logind[1126]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:53:20.070392 kernel: audit: type=1104 audit(1707436400.046:340): pid=4408 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:20.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.64:22-172.24.4.1:57976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:20.072096 systemd-logind[1126]: Removed session 9. Feb 8 23:53:25.051027 systemd[1]: Started sshd@9-172.24.4.64:22-172.24.4.1:44392.service. 
Feb 8 23:53:25.054082 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:53:25.054182 kernel: audit: type=1130 audit(1707436405.051:342): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.64:22-172.24.4.1:44392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:25.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.64:22-172.24.4.1:44392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:26.502000 audit[4447]: USER_ACCT pid=4447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:26.503080 sshd[4447]: Accepted publickey for core from 172.24.4.1 port 44392 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:53:26.513414 kernel: audit: type=1101 audit(1707436406.502:343): pid=4447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:26.513000 audit[4447]: CRED_ACQ pid=4447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:26.515081 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:53:26.524717 kernel: audit: type=1103 audit(1707436406.513:344): pid=4447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:26.524832 kernel: audit: type=1006 audit(1707436406.514:345): pid=4447 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 8 23:53:26.514000 audit[4447]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd43265420 a2=3 a3=0 items=0 ppid=1 pid=4447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:26.551720 kernel: audit: type=1300 audit(1707436406.514:345): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd43265420 a2=3 a3=0 items=0 ppid=1 pid=4447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:26.552842 kernel: audit: type=1327 audit(1707436406.514:345): proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:26.514000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:26.562411 systemd-logind[1126]: New session 10 of user core. Feb 8 23:53:26.562860 systemd[1]: Started session-10.scope. 
Feb 8 23:53:26.577000 audit[4447]: USER_START pid=4447 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:26.590369 kernel: audit: type=1105 audit(1707436406.577:346): pid=4447 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:26.590000 audit[4450]: CRED_ACQ pid=4450 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:26.596428 kernel: audit: type=1103 audit(1707436406.590:347): pid=4450 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:27.617689 sshd[4447]: pam_unix(sshd:session): session closed for user core Feb 8 23:53:27.619000 audit[4447]: USER_END pid=4447 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:27.620000 audit[4447]: CRED_DISP pid=4447 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:27.633028 systemd[1]: sshd@9-172.24.4.64:22-172.24.4.1:44392.service: Deactivated successfully. Feb 8 23:53:27.635413 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:53:27.641557 kernel: audit: type=1106 audit(1707436407.619:348): pid=4447 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:27.641707 kernel: audit: type=1104 audit(1707436407.620:349): pid=4447 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:27.643482 systemd-logind[1126]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:53:27.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.64:22-172.24.4.1:44392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:27.645755 systemd-logind[1126]: Removed session 10. Feb 8 23:53:32.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.64:22-172.24.4.1:44408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:53:32.625441 systemd[1]: Started sshd@10-172.24.4.64:22-172.24.4.1:44408.service. Feb 8 23:53:32.662598 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:53:32.662717 kernel: audit: type=1130 audit(1707436412.625:351): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.64:22-172.24.4.1:44408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:33.985000 audit[4468]: USER_ACCT pid=4468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:33.986745 sshd[4468]: Accepted publickey for core from 172.24.4.1 port 44408 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:53:33.997348 kernel: audit: type=1101 audit(1707436413.985:352): pid=4468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:33.997000 audit[4468]: CRED_ACQ pid=4468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:33.999062 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:53:34.011679 kernel: audit: type=1103 audit(1707436413.997:353): pid=4468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:34.011895 kernel: audit: type=1006 audit(1707436413.998:354): pid=4468 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 8 23:53:33.998000 audit[4468]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc28d07c40 a2=3 a3=0 items=0 ppid=1 pid=4468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:33.998000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:34.032738 kernel: audit: type=1300 audit(1707436413.998:354): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc28d07c40 a2=3 a3=0 items=0 ppid=1 pid=4468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:34.032942 kernel: audit: type=1327 audit(1707436413.998:354): proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:34.036619 systemd[1]: Started session-11.scope. Feb 8 23:53:34.037084 systemd-logind[1126]: New session 11 of user core. 
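Apart from the timestamps, every audit record here is a flat run of key=value pairs, with the PAM detail nested inside msg='...'. For ad-hoc analysis of a capture like this one, a rough parser is enough; the sketch below (my own illustration; the field names are the ones appearing in the records above) flattens one record into a dict:

    import re

    # key=value where the value is 'single-quoted', "double-quoted", or bare.
    PAIR = re.compile(r"""(\w[\w-]*)=('[^']*'|"[^"]*"|\S+)""")

    def parse_audit_record(line: str) -> dict:
        fields = {k: v.strip("'\"") for k, v in PAIR.findall(line)}
        if "=" in fields.get("msg", ""):
            # The PAM detail is itself key=value pairs nested inside msg='...'.
            fields.update({k: v.strip('"') for k, v in PAIR.findall(fields["msg"])})
        return fields

    sample = ("audit[4468]: USER_ACCT pid=4468 uid=0 auid=4294967295 ses=4294967295 "
              "subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting "
              "grantors=pam_access,pam_unix,pam_faillock,pam_permit acct=\"core\" "
              "exe=\"/usr/sbin/sshd\" hostname=172.24.4.1 addr=172.24.4.1 "
              "terminal=ssh res=success'")
    rec = parse_audit_record(sample)
    print(rec["pid"], rec["acct"], rec["addr"], rec["res"])   # 4468 core 172.24.4.1 success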
Feb 8 23:53:34.055000 audit[4468]: USER_START pid=4468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:34.067342 kernel: audit: type=1105 audit(1707436414.055:355): pid=4468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:34.057000 audit[4471]: CRED_ACQ pid=4471 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:34.073315 kernel: audit: type=1103 audit(1707436414.057:356): pid=4471 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:34.833679 sshd[4468]: pam_unix(sshd:session): session closed for user core Feb 8 23:53:34.837000 audit[4468]: USER_END pid=4468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:34.850367 kernel: audit: type=1106 audit(1707436414.837:357): pid=4468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:34.852136 systemd[1]: Started sshd@11-172.24.4.64:22-172.24.4.1:48250.service. Feb 8 23:53:34.853357 systemd[1]: sshd@10-172.24.4.64:22-172.24.4.1:44408.service: Deactivated successfully. Feb 8 23:53:34.860065 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:53:34.861483 systemd-logind[1126]: Session 11 logged out. Waiting for processes to exit. Feb 8 23:53:34.837000 audit[4468]: CRED_DISP pid=4468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:34.875406 kernel: audit: type=1104 audit(1707436414.837:358): pid=4468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:34.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.64:22-172.24.4.1:48250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:34.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.64:22-172.24.4.1:44408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:53:34.877488 systemd-logind[1126]: Removed session 11. Feb 8 23:53:36.194000 audit[4479]: USER_ACCT pid=4479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:36.196600 sshd[4479]: Accepted publickey for core from 172.24.4.1 port 48250 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:53:36.199000 audit[4479]: CRED_ACQ pid=4479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:36.200000 audit[4479]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb452fef0 a2=3 a3=0 items=0 ppid=1 pid=4479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:36.200000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:36.202253 sshd[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:53:36.217530 systemd[1]: Started session-12.scope. Feb 8 23:53:36.218019 systemd-logind[1126]: New session 12 of user core. Feb 8 23:53:36.233000 audit[4479]: USER_START pid=4479 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:36.236000 audit[4484]: CRED_ACQ pid=4484 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:36.456657 systemd[1]: run-containerd-runc-k8s.io-f706f26dbf540a674496c77c24f51df5b923036da2f6384b022f61e150525f5a-runc.RdWtZr.mount: Deactivated successfully. Feb 8 23:53:37.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.64:22-172.24.4.1:48266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:37.458465 systemd[1]: Started sshd@12-172.24.4.64:22-172.24.4.1:48266.service. Feb 8 23:53:37.465020 sshd[4479]: pam_unix(sshd:session): session closed for user core Feb 8 23:53:37.466000 audit[4479]: USER_END pid=4479 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:37.467000 audit[4479]: CRED_DISP pid=4479 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:37.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.64:22-172.24.4.1:48250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:53:37.471483 systemd[1]: sshd@11-172.24.4.64:22-172.24.4.1:48250.service: Deactivated successfully. Feb 8 23:53:37.473076 systemd[1]: session-12.scope: Deactivated successfully. Feb 8 23:53:37.473709 systemd-logind[1126]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:53:37.475264 systemd-logind[1126]: Removed session 12. Feb 8 23:53:38.757640 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 8 23:53:38.757926 kernel: audit: type=1101 audit(1707436418.753:370): pid=4508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:38.753000 audit[4508]: USER_ACCT pid=4508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:38.767787 sshd[4508]: Accepted publickey for core from 172.24.4.1 port 48266 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:53:38.767000 audit[4508]: CRED_ACQ pid=4508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:38.780270 sshd[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:53:38.786180 kernel: audit: type=1103 audit(1707436418.767:371): pid=4508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:38.786461 kernel: audit: type=1006 audit(1707436418.767:372): pid=4508 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Feb 8 23:53:38.786616 kernel: audit: type=1300 audit(1707436418.767:372): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe258060f0 a2=3 a3=0 items=0 ppid=1 pid=4508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:38.767000 audit[4508]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe258060f0 a2=3 a3=0 items=0 ppid=1 pid=4508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:38.767000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:38.802357 kernel: audit: type=1327 audit(1707436418.767:372): proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:38.813508 systemd-logind[1126]: New session 13 of user core. Feb 8 23:53:38.815020 systemd[1]: Started session-13.scope. 
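The same records appear twice throughout this capture: once named by auditd (USER_ACCT, CRED_ACQ, ...) and once echoed into dmesg by the kernel with only a numeric type (audit: type=1101 ...). The pairings that actually occur above are the standard Linux audit record types, tabulated here as a small lookup for post-processing (numbers taken from the log itself):

    # Audit record types observed in this capture: type number -> record name.
    AUDIT_TYPES = {
        1006: "LOGIN",          # loginuid change (old-auid=... auid=500 ses=N)
        1101: "USER_ACCT",      # PAM accounting
        1103: "CRED_ACQ",       # PAM setcred, credentials acquired
        1104: "CRED_DISP",      # PAM setcred, credentials disposed
        1105: "USER_START",     # PAM session_open
        1106: "USER_END",       # PAM session_close
        1130: "SERVICE_START",  # systemd started a unit (per-connection sshd@... here)
        1131: "SERVICE_STOP",   # systemd stopped a unit
        1300: "SYSCALL",        # syscall record attached to the login event
        1327: "PROCTITLE",      # hex-encoded process title ("sshd: core [priv]")
    }

    def type_name(n: int) -> str:
        return AUDIT_TYPES.get(n, f"UNKNOWN({n})")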
Feb 8 23:53:38.829000 audit[4508]: USER_START pid=4508 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:38.844355 kernel: audit: type=1105 audit(1707436418.829:373): pid=4508 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:38.830000 audit[4515]: CRED_ACQ pid=4515 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:38.855340 kernel: audit: type=1103 audit(1707436418.830:374): pid=4515 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:39.567712 sshd[4508]: pam_unix(sshd:session): session closed for user core Feb 8 23:53:39.568000 audit[4508]: USER_END pid=4508 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:39.585346 kernel: audit: type=1106 audit(1707436419.568:375): pid=4508 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:39.585574 kernel: audit: type=1104 audit(1707436419.569:376): pid=4508 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:39.569000 audit[4508]: CRED_DISP pid=4508 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:39.584386 systemd[1]: sshd@12-172.24.4.64:22-172.24.4.1:48266.service: Deactivated successfully. Feb 8 23:53:39.586827 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:53:39.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.64:22-172.24.4.1:48266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:39.604517 kernel: audit: type=1131 audit(1707436419.582:377): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.64:22-172.24.4.1:48266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:39.603820 systemd-logind[1126]: Session 13 logged out. Waiting for processes to exit. Feb 8 23:53:39.606538 systemd-logind[1126]: Removed session 13. 
Feb 8 23:53:44.576271 systemd[1]: Started sshd@13-172.24.4.64:22-172.24.4.1:39694.service. Feb 8 23:53:44.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.64:22-172.24.4.1:39694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:44.587404 kernel: audit: type=1130 audit(1707436424.581:378): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.64:22-172.24.4.1:39694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:46.001000 audit[4549]: USER_ACCT pid=4549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.003463 sshd[4549]: Accepted publickey for core from 172.24.4.1 port 39694 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:53:46.006000 audit[4549]: CRED_ACQ pid=4549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.009072 sshd[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:53:46.012425 kernel: audit: type=1101 audit(1707436426.001:379): pid=4549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.012531 kernel: audit: type=1103 audit(1707436426.006:380): pid=4549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.007000 audit[4549]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc869bfe70 a2=3 a3=0 items=0 ppid=1 pid=4549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:46.021426 kernel: audit: type=1006 audit(1707436426.007:381): pid=4549 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 8 23:53:46.021587 kernel: audit: type=1300 audit(1707436426.007:381): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc869bfe70 a2=3 a3=0 items=0 ppid=1 pid=4549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:46.007000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:46.026985 kernel: audit: type=1327 audit(1707436426.007:381): proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:46.026802 systemd[1]: Started session-14.scope. Feb 8 23:53:46.028142 systemd-logind[1126]: New session 14 of user core. 
Feb 8 23:53:46.034000 audit[4549]: USER_START pid=4549 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.042330 kernel: audit: type=1105 audit(1707436426.034:382): pid=4549 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.040000 audit[4552]: CRED_ACQ pid=4552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.047373 kernel: audit: type=1103 audit(1707436426.040:383): pid=4552 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.819094 sshd[4549]: pam_unix(sshd:session): session closed for user core Feb 8 23:53:46.819000 audit[4549]: USER_END pid=4549 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.821779 systemd[1]: sshd@13-172.24.4.64:22-172.24.4.1:39694.service: Deactivated successfully. Feb 8 23:53:46.822597 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:53:46.819000 audit[4549]: CRED_DISP pid=4549 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.829622 kernel: audit: type=1106 audit(1707436426.819:384): pid=4549 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.829723 kernel: audit: type=1104 audit(1707436426.819:385): pid=4549 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:46.829654 systemd-logind[1126]: Session 14 logged out. Waiting for processes to exit. Feb 8 23:53:46.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.64:22-172.24.4.1:39694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:46.830683 systemd-logind[1126]: Removed session 14. Feb 8 23:53:47.918158 systemd[1]: run-containerd-runc-k8s.io-5dc66cc8bf05b6a49fa5ff2880a81eba1ba05adffc29ddc0b695b4bc47e681b8-runc.Bohl41.mount: Deactivated successfully. Feb 8 23:53:51.828713 systemd[1]: Started sshd@14-172.24.4.64:22-172.24.4.1:39696.service. 
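Each inbound connection gets its own transient systemd unit, named sshd@<instance>-<local addr>:<local port>-<peer addr>:<peer port>.service (sshd@14-172.24.4.64:22-172.24.4.1:39694.service above), and the SERVICE_START/SERVICE_STOP audit events bracket that unit's lifetime. A throwaway parser for the IPv4 form of those names used in this log (illustrative only):

    import re

    # e.g. sshd@14-172.24.4.64:22-172.24.4.1:39694.service
    UNIT = re.compile(
        r"sshd@(?P<instance>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-"
        r"(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
    )

    m = UNIT.search("sshd@14-172.24.4.64:22-172.24.4.1:39694.service")
    assert m and (m["raddr"], m["rport"]) == ("172.24.4.1", "39694")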
Feb 8 23:53:51.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.64:22-172.24.4.1:39696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:51.833339 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:53:51.833468 kernel: audit: type=1130 audit(1707436431.828:387): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.64:22-172.24.4.1:39696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:53.002000 audit[4587]: USER_ACCT pid=4587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.015230 sshd[4587]: Accepted publickey for core from 172.24.4.1 port 39696 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:53:53.015900 kernel: audit: type=1101 audit(1707436433.002:388): pid=4587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.015967 sshd[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:53:53.013000 audit[4587]: CRED_ACQ pid=4587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.033853 kernel: audit: type=1103 audit(1707436433.013:389): pid=4587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.033988 kernel: audit: type=1006 audit(1707436433.013:390): pid=4587 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Feb 8 23:53:53.034603 kernel: audit: type=1300 audit(1707436433.013:390): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd009be7f0 a2=3 a3=0 items=0 ppid=1 pid=4587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:53.013000 audit[4587]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd009be7f0 a2=3 a3=0 items=0 ppid=1 pid=4587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:53:53.013000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:53.052456 kernel: audit: type=1327 audit(1707436433.013:390): proctitle=737368643A20636F7265205B707269765D Feb 8 23:53:53.055123 systemd-logind[1126]: New session 15 of user core. Feb 8 23:53:53.060414 systemd[1]: Started session-15.scope. 
Feb 8 23:53:53.072000 audit[4587]: USER_START pid=4587 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.079390 kernel: audit: type=1105 audit(1707436433.072:391): pid=4587 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.079000 audit[4591]: CRED_ACQ pid=4591 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.085377 kernel: audit: type=1103 audit(1707436433.079:392): pid=4591 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.805112 sshd[4587]: pam_unix(sshd:session): session closed for user core Feb 8 23:53:53.806000 audit[4587]: USER_END pid=4587 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.811228 systemd[1]: sshd@14-172.24.4.64:22-172.24.4.1:39696.service: Deactivated successfully. Feb 8 23:53:53.813029 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:53:53.829394 kernel: audit: type=1106 audit(1707436433.806:393): pid=4587 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.806000 audit[4587]: CRED_DISP pid=4587 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.842198 kernel: audit: type=1104 audit(1707436433.806:394): pid=4587 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:53:53.840957 systemd-logind[1126]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:53:53.843083 systemd-logind[1126]: Removed session 15. Feb 8 23:53:53.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.64:22-172.24.4.1:39696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:54.455743 env[1139]: time="2024-02-08T23:53:54.455619162Z" level=info msg="StopPodSandbox for \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\"" Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.621 [WARNING][4617] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0", GenerateName:"calico-kube-controllers-78f9d567d-", Namespace:"calico-system", SelfLink:"", UID:"d775a679-abfc-4c4d-a44c-4e5893e5a899", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f9d567d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8", Pod:"calico-kube-controllers-78f9d567d-vn2vk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali769ddc0980e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.621 [INFO][4617] k8s.go 578: Cleaning up netns ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.625 [INFO][4617] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" iface="eth0" netns="" Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.625 [INFO][4617] k8s.go 585: Releasing IP address(es) ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.625 [INFO][4617] utils.go 188: Calico CNI releasing IP address ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.722 [INFO][4623] ipam_plugin.go 415: Releasing address using handleID ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" HandleID="k8s-pod-network.a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.722 [INFO][4623] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.723 [INFO][4623] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.742 [WARNING][4623] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" HandleID="k8s-pod-network.a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.742 [INFO][4623] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" HandleID="k8s-pod-network.a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.746 [INFO][4623] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:53:54.755633 env[1139]: 2024-02-08 23:53:54.750 [INFO][4617] k8s.go 591: Teardown processing complete. ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:54.757166 env[1139]: time="2024-02-08T23:53:54.757101109Z" level=info msg="TearDown network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\" successfully" Feb 8 23:53:54.757392 env[1139]: time="2024-02-08T23:53:54.757343098Z" level=info msg="StopPodSandbox for \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\" returns successfully" Feb 8 23:53:54.759661 env[1139]: time="2024-02-08T23:53:54.759605917Z" level=info msg="RemovePodSandbox for \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\"" Feb 8 23:53:54.760169 env[1139]: time="2024-02-08T23:53:54.760055149Z" level=info msg="Forcibly stopping sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\"" Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.840 [WARNING][4642] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0", GenerateName:"calico-kube-controllers-78f9d567d-", Namespace:"calico-system", SelfLink:"", UID:"d775a679-abfc-4c4d-a44c-4e5893e5a899", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 52, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f9d567d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"30b97813c4a36c6c0584c97af5da50147a38054000c67372ad103ddbae480bf8", Pod:"calico-kube-controllers-78f9d567d-vn2vk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali769ddc0980e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.841 [INFO][4642] k8s.go 578: Cleaning up netns ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.841 [INFO][4642] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" iface="eth0" netns="" Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.841 [INFO][4642] k8s.go 585: Releasing IP address(es) ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.841 [INFO][4642] utils.go 188: Calico CNI releasing IP address ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.875 [INFO][4648] ipam_plugin.go 415: Releasing address using handleID ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" HandleID="k8s-pod-network.a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.876 [INFO][4648] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.876 [INFO][4648] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.893 [WARNING][4648] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" HandleID="k8s-pod-network.a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.893 [INFO][4648] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" HandleID="k8s-pod-network.a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--kube--controllers--78f9d567d--vn2vk-eth0" Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.895 [INFO][4648] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:53:54.899824 env[1139]: 2024-02-08 23:53:54.897 [INFO][4642] k8s.go 591: Teardown processing complete. ContainerID="a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd" Feb 8 23:53:54.900864 env[1139]: time="2024-02-08T23:53:54.900795879Z" level=info msg="TearDown network for sandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\" successfully" Feb 8 23:53:54.905988 env[1139]: time="2024-02-08T23:53:54.905895527Z" level=info msg="RemovePodSandbox \"a18751faf5742c037f10af3b34d8e32a29a6c8867de6f2b6b3b553ea831b03fd\" returns successfully" Feb 8 23:53:58.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.64:22-172.24.4.1:56058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:58.813634 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:53:58.813712 kernel: audit: type=1130 audit(1707436438.809:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.64:22-172.24.4.1:56058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:58.810812 systemd[1]: Started sshd@15-172.24.4.64:22-172.24.4.1:56058.service. 
Feb 8 23:54:00.252000 audit[4655]: USER_ACCT pid=4655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:00.253953 sshd[4655]: Accepted publickey for core from 172.24.4.1 port 56058 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:00.264357 kernel: audit: type=1101 audit(1707436440.252:397): pid=4655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:00.274949 kernel: audit: type=1103 audit(1707436440.263:398): pid=4655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:00.263000 audit[4655]: CRED_ACQ pid=4655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:00.265567 sshd[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:00.276644 kernel: audit: type=1006 audit(1707436440.263:399): pid=4655 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Feb 8 23:54:00.263000 audit[4655]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedc44d920 a2=3 a3=0 items=0 ppid=1 pid=4655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:00.293390 kernel: audit: type=1300 audit(1707436440.263:399): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedc44d920 a2=3 a3=0 items=0 ppid=1 pid=4655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:00.263000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:00.298471 kernel: audit: type=1327 audit(1707436440.263:399): proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:00.304875 systemd-logind[1126]: New session 16 of user core. Feb 8 23:54:00.306122 systemd[1]: Started session-16.scope. 
Feb 8 23:54:00.337465 kernel: audit: type=1105 audit(1707436440.322:400): pid=4655 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:00.322000 audit[4655]: USER_START pid=4655 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:00.324000 audit[4658]: CRED_ACQ pid=4658 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:00.347339 kernel: audit: type=1103 audit(1707436440.324:401): pid=4658 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:01.104677 sshd[4655]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:01.108000 audit[4655]: USER_END pid=4655 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:01.109000 audit[4655]: CRED_DISP pid=4655 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:01.124129 systemd[1]: Started sshd@16-172.24.4.64:22-172.24.4.1:56074.service. Feb 8 23:54:01.128060 systemd[1]: sshd@15-172.24.4.64:22-172.24.4.1:56058.service: Deactivated successfully. Feb 8 23:54:01.131074 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:54:01.132753 kernel: audit: type=1106 audit(1707436441.108:402): pid=4655 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:01.132899 kernel: audit: type=1104 audit(1707436441.109:403): pid=4655 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:01.135744 systemd-logind[1126]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:54:01.140212 systemd-logind[1126]: Removed session 16. Feb 8 23:54:01.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.64:22-172.24.4.1:56074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:01.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.64:22-172.24.4.1:56058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:54:02.476000 audit[4666]: USER_ACCT pid=4666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:02.478714 sshd[4666]: Accepted publickey for core from 172.24.4.1 port 56074 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:02.481000 audit[4666]: CRED_ACQ pid=4666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:02.482000 audit[4666]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefbc96cf0 a2=3 a3=0 items=0 ppid=1 pid=4666 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:02.482000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:02.485159 sshd[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:02.499432 systemd[1]: Started session-17.scope. Feb 8 23:54:02.500678 systemd-logind[1126]: New session 17 of user core. Feb 8 23:54:02.515000 audit[4666]: USER_START pid=4666 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:02.519000 audit[4670]: CRED_ACQ pid=4670 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:04.680535 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 8 23:54:04.680770 kernel: audit: type=1130 audit(1707436444.668:411): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.64:22-172.24.4.1:33416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:04.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.64:22-172.24.4.1:33416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:04.669109 systemd[1]: Started sshd@17-172.24.4.64:22-172.24.4.1:33416.service. Feb 8 23:54:04.665399 sshd[4666]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:04.677396 systemd[1]: sshd@16-172.24.4.64:22-172.24.4.1:56074.service: Deactivated successfully. Feb 8 23:54:04.679748 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:54:04.671000 audit[4666]: USER_END pid=4666 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:04.688892 systemd-logind[1126]: Session 17 logged out. Waiting for processes to exit. 
Feb 8 23:54:04.671000 audit[4666]: CRED_DISP pid=4666 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:04.691919 systemd-logind[1126]: Removed session 17. Feb 8 23:54:04.696195 kernel: audit: type=1106 audit(1707436444.671:412): pid=4666 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:04.696419 kernel: audit: type=1104 audit(1707436444.671:413): pid=4666 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:04.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.64:22-172.24.4.1:56074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:04.701464 kernel: audit: type=1131 audit(1707436444.676:414): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.64:22-172.24.4.1:56074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:05.890000 audit[4677]: USER_ACCT pid=4677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:05.903468 kernel: audit: type=1101 audit(1707436445.890:415): pid=4677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:05.903556 sshd[4677]: Accepted publickey for core from 172.24.4.1 port 33416 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:05.901000 audit[4677]: CRED_ACQ pid=4677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:05.914359 kernel: audit: type=1103 audit(1707436445.901:416): pid=4677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:05.922026 kernel: audit: type=1006 audit(1707436445.902:417): pid=4677 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 8 23:54:05.922141 kernel: audit: type=1300 audit(1707436445.902:417): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2ef7c1d0 a2=3 a3=0 items=0 ppid=1 pid=4677 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:05.902000 audit[4677]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2ef7c1d0 a2=3 a3=0 items=0 
ppid=1 pid=4677 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:05.921721 sshd[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:05.902000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:05.937395 kernel: audit: type=1327 audit(1707436445.902:417): proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:05.946470 systemd-logind[1126]: New session 18 of user core. Feb 8 23:54:05.948982 systemd[1]: Started session-18.scope. Feb 8 23:54:05.967000 audit[4677]: USER_START pid=4677 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:05.982482 kernel: audit: type=1105 audit(1707436445.967:418): pid=4677 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:05.980000 audit[4682]: CRED_ACQ pid=4682 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:06.434367 systemd[1]: run-containerd-runc-k8s.io-f706f26dbf540a674496c77c24f51df5b923036da2f6384b022f61e150525f5a-runc.6xn6Zo.mount: Deactivated successfully. Feb 8 23:54:07.810000 audit[4737]: NETFILTER_CFG table=filter:127 family=2 entries=18 op=nft_register_rule pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:07.810000 audit[4737]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7fffd035a050 a2=0 a3=7fffd035a03c items=0 ppid=2271 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:07.810000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:07.812000 audit[4737]: NETFILTER_CFG table=nat:128 family=2 entries=78 op=nft_register_rule pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:07.812000 audit[4737]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fffd035a050 a2=0 a3=7fffd035a03c items=0 ppid=2271 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:07.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:07.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.64:22-172.24.4.1:33426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:07.830265 systemd[1]: Started sshd@18-172.24.4.64:22-172.24.4.1:33426.service. 
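The proctitle= field in the PROCTITLE records above is the audited process's command line, hex-encoded with NUL bytes separating the arguments, and the audit(1707436440.252:397)-style prefix is a Unix timestamp (seconds.milliseconds) plus an event serial. A minimal Python sketch for decoding both, using values copied verbatim from the records above (the helper name is illustrative):

    import datetime

    def decode_proctitle(hex_value: str) -> str:
        # The kernel hex-encodes the command line; NUL bytes separate argv entries.
        return bytes.fromhex(hex_value).decode("utf-8", errors="replace").replace("\x00", " ")

    # The sshd sessions above: "sshd: core [priv]"
    print(decode_proctitle("737368643A20636F7265205B707269765D"))

    # The NETFILTER_CFG events above: "iptables-restore -w 5 -W 100000 --noflush --counters"
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    ))

    # audit(1707436440.252:397) -> 2024-02-08 23:54:00.252 UTC, matching the wall-clock
    # stamp "Feb 8 23:54:00.252000" on the same record.
    print(datetime.datetime.fromtimestamp(1707436440.252, tz=datetime.timezone.utc))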
Feb 8 23:54:07.840518 sshd[4677]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:07.842000 audit[4677]: USER_END pid=4677 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:07.842000 audit[4677]: CRED_DISP pid=4677 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:07.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.64:22-172.24.4.1:33416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:07.848800 systemd[1]: sshd@17-172.24.4.64:22-172.24.4.1:33416.service: Deactivated successfully. Feb 8 23:54:07.850446 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:54:07.851136 systemd-logind[1126]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:54:07.853060 systemd-logind[1126]: Removed session 18. Feb 8 23:54:07.949000 audit[4766]: NETFILTER_CFG table=filter:129 family=2 entries=30 op=nft_register_rule pid=4766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:07.949000 audit[4766]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7fff56919260 a2=0 a3=7fff5691924c items=0 ppid=2271 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:07.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:07.952000 audit[4766]: NETFILTER_CFG table=nat:130 family=2 entries=78 op=nft_register_rule pid=4766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:07.952000 audit[4766]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fff56919260 a2=0 a3=7fff5691924c items=0 ppid=2271 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:07.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:09.173000 audit[4743]: USER_ACCT pid=4743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:09.177149 sshd[4743]: Accepted publickey for core from 172.24.4.1 port 33426 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:09.176000 audit[4743]: CRED_ACQ pid=4743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:09.177000 audit[4743]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe41a9e0d0 a2=3 a3=0 items=0 ppid=1 pid=4743 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:09.177000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:09.179689 sshd[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:09.192004 systemd[1]: Started session-19.scope. Feb 8 23:54:09.192553 systemd-logind[1126]: New session 19 of user core. Feb 8 23:54:09.205000 audit[4743]: USER_START pid=4743 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:09.210000 audit[4769]: CRED_ACQ pid=4769 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:10.413136 sshd[4743]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:10.416000 audit[4743]: USER_END pid=4743 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:10.421973 systemd[1]: Started sshd@19-172.24.4.64:22-172.24.4.1:33432.service. Feb 8 23:54:10.422725 kernel: kauditd_printk_skb: 24 callbacks suppressed Feb 8 23:54:10.422852 kernel: audit: type=1106 audit(1707436450.416:433): pid=4743 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:10.424209 systemd[1]: sshd@18-172.24.4.64:22-172.24.4.1:33426.service: Deactivated successfully. Feb 8 23:54:10.417000 audit[4743]: CRED_DISP pid=4743 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:10.449386 kernel: audit: type=1104 audit(1707436450.417:434): pid=4743 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:10.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.64:22-172.24.4.1:33432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:10.450702 systemd[1]: session-19.scope: Deactivated successfully. Feb 8 23:54:10.460651 kernel: audit: type=1130 audit(1707436450.421:435): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.64:22-172.24.4.1:33432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:10.460980 systemd-logind[1126]: Session 19 logged out. Waiting for processes to exit. 
Feb 8 23:54:10.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.64:22-172.24.4.1:33426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:10.474404 kernel: audit: type=1131 audit(1707436450.435:436): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.64:22-172.24.4.1:33426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:10.475372 systemd-logind[1126]: Removed session 19. Feb 8 23:54:12.086718 sshd[4780]: Accepted publickey for core from 172.24.4.1 port 33432 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:12.086000 audit[4780]: USER_ACCT pid=4780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:12.100408 kernel: audit: type=1101 audit(1707436452.086:437): pid=4780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:12.100000 audit[4780]: CRED_ACQ pid=4780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:12.101636 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:12.112899 kernel: audit: type=1103 audit(1707436452.100:438): pid=4780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:12.113158 kernel: audit: type=1006 audit(1707436452.100:439): pid=4780 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Feb 8 23:54:12.100000 audit[4780]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcecf5cb0 a2=3 a3=0 items=0 ppid=1 pid=4780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:12.130590 kernel: audit: type=1300 audit(1707436452.100:439): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcecf5cb0 a2=3 a3=0 items=0 ppid=1 pid=4780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:12.100000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:12.135538 kernel: audit: type=1327 audit(1707436452.100:439): proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:12.144811 systemd-logind[1126]: New session 20 of user core. Feb 8 23:54:12.147069 systemd[1]: Started session-20.scope. 
Feb 8 23:54:12.162000 audit[4780]: USER_START pid=4780 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:12.169356 kernel: audit: type=1105 audit(1707436452.162:440): pid=4780 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:12.169000 audit[4785]: CRED_ACQ pid=4785 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:12.667048 sshd[4780]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:12.670000 audit[4780]: USER_END pid=4780 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:12.670000 audit[4780]: CRED_DISP pid=4780 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:12.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.64:22-172.24.4.1:33432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:12.675285 systemd[1]: sshd@19-172.24.4.64:22-172.24.4.1:33432.service: Deactivated successfully. Feb 8 23:54:12.679233 systemd-logind[1126]: Session 20 logged out. Waiting for processes to exit. Feb 8 23:54:12.681079 systemd[1]: session-20.scope: Deactivated successfully. Feb 8 23:54:12.687401 systemd-logind[1126]: Removed session 20. 
Feb 8 23:54:17.443352 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 8 23:54:17.443500 kernel: audit: type=1325 audit(1707436457.441:445): table=filter:131 family=2 entries=18 op=nft_register_rule pid=4820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:17.441000 audit[4820]: NETFILTER_CFG table=filter:131 family=2 entries=18 op=nft_register_rule pid=4820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:17.441000 audit[4820]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd2511bf00 a2=0 a3=7ffd2511beec items=0 ppid=2271 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:17.451472 kernel: audit: type=1300 audit(1707436457.441:445): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd2511bf00 a2=0 a3=7ffd2511beec items=0 ppid=2271 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:17.441000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:17.456327 kernel: audit: type=1327 audit(1707436457.441:445): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:17.459000 audit[4820]: NETFILTER_CFG table=nat:132 family=2 entries=162 op=nft_register_chain pid=4820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:17.463326 kernel: audit: type=1325 audit(1707436457.459:446): table=nat:132 family=2 entries=162 op=nft_register_chain pid=4820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:17.464213 kernel: audit: type=1300 audit(1707436457.459:446): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd2511bf00 a2=0 a3=7ffd2511beec items=0 ppid=2271 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:17.459000 audit[4820]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd2511bf00 a2=0 a3=7ffd2511beec items=0 ppid=2271 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:17.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:17.471882 kernel: audit: type=1327 audit(1707436457.459:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:17.671977 systemd[1]: Started sshd@20-172.24.4.64:22-172.24.4.1:39390.service. Feb 8 23:54:17.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.64:22-172.24.4.1:39390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:54:17.677364 kernel: audit: type=1130 audit(1707436457.671:447): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.64:22-172.24.4.1:39390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:18.772024 sshd[4822]: Accepted publickey for core from 172.24.4.1 port 39390 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:18.771000 audit[4822]: USER_ACCT pid=4822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:18.786736 sshd[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:18.791391 kernel: audit: type=1101 audit(1707436458.771:448): pid=4822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:18.786000 audit[4822]: CRED_ACQ pid=4822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:18.813350 kernel: audit: type=1103 audit(1707436458.786:449): pid=4822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:18.817490 systemd[1]: Started session-21.scope. Feb 8 23:54:18.818377 systemd-logind[1126]: New session 21 of user core. 
Feb 8 23:54:18.786000 audit[4822]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcacfbaaf0 a2=3 a3=0 items=0 ppid=1 pid=4822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:18.786000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:18.826000 audit[4822]: USER_START pid=4822 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:18.828000 audit[4846]: CRED_ACQ pid=4846 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:18.832355 kernel: audit: type=1006 audit(1707436458.786:450): pid=4822 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Feb 8 23:54:19.495835 sshd[4822]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:19.498000 audit[4822]: USER_END pid=4822 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:19.498000 audit[4822]: CRED_DISP pid=4822 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:19.503156 systemd[1]: sshd@20-172.24.4.64:22-172.24.4.1:39390.service: Deactivated successfully. Feb 8 23:54:19.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.64:22-172.24.4.1:39390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:19.506580 systemd[1]: session-21.scope: Deactivated successfully. Feb 8 23:54:19.508001 systemd-logind[1126]: Session 21 logged out. Waiting for processes to exit. Feb 8 23:54:19.512062 systemd-logind[1126]: Removed session 21. 
Feb 8 23:54:22.718829 kubelet[2113]: I0208 23:54:22.718775 2113 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:54:22.788000 audit[4882]: NETFILTER_CFG table=filter:133 family=2 entries=7 op=nft_register_rule pid=4882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:22.789985 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 8 23:54:22.790038 kernel: audit: type=1325 audit(1707436462.788:456): table=filter:133 family=2 entries=7 op=nft_register_rule pid=4882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:22.788000 audit[4882]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fffd6460e00 a2=0 a3=7fffd6460dec items=0 ppid=2271 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:22.797968 kernel: audit: type=1300 audit(1707436462.788:456): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fffd6460e00 a2=0 a3=7fffd6460dec items=0 ppid=2271 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:22.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:22.800815 kernel: audit: type=1327 audit(1707436462.788:456): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:22.792000 audit[4882]: NETFILTER_CFG table=nat:134 family=2 entries=198 op=nft_register_rule pid=4882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:22.792000 audit[4882]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fffd6460e00 a2=0 a3=7fffd6460dec items=0 ppid=2271 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:22.813448 kernel: audit: type=1325 audit(1707436462.792:457): table=nat:134 family=2 entries=198 op=nft_register_rule pid=4882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:22.813513 kernel: audit: type=1300 audit(1707436462.792:457): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fffd6460e00 a2=0 a3=7fffd6460dec items=0 ppid=2271 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:22.792000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:22.816332 kernel: audit: type=1327 audit(1707436462.792:457): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:22.825246 kubelet[2113]: I0208 23:54:22.825224 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/762de1bc-9e3a-4078-b97a-2e2dbc3cca87-calico-apiserver-certs\") pod \"calico-apiserver-65b7dbcd78-9rp72\" (UID: \"762de1bc-9e3a-4078-b97a-2e2dbc3cca87\") " pod="calico-apiserver/calico-apiserver-65b7dbcd78-9rp72" Feb 
8 23:54:22.826822 kubelet[2113]: I0208 23:54:22.826801 2113 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qqn\" (UniqueName: \"kubernetes.io/projected/762de1bc-9e3a-4078-b97a-2e2dbc3cca87-kube-api-access-64qqn\") pod \"calico-apiserver-65b7dbcd78-9rp72\" (UID: \"762de1bc-9e3a-4078-b97a-2e2dbc3cca87\") " pod="calico-apiserver/calico-apiserver-65b7dbcd78-9rp72" Feb 8 23:54:22.885000 audit[4908]: NETFILTER_CFG table=filter:135 family=2 entries=8 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:22.885000 audit[4908]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffdb723e650 a2=0 a3=7ffdb723e63c items=0 ppid=2271 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:22.894360 kernel: audit: type=1325 audit(1707436462.885:458): table=filter:135 family=2 entries=8 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:22.894417 kernel: audit: type=1300 audit(1707436462.885:458): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffdb723e650 a2=0 a3=7ffdb723e63c items=0 ppid=2271 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:22.895045 kernel: audit: type=1327 audit(1707436462.885:458): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:22.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:22.888000 audit[4908]: NETFILTER_CFG table=nat:136 family=2 entries=198 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:22.888000 audit[4908]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffdb723e650 a2=0 a3=7ffdb723e63c items=0 ppid=2271 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:22.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:22.901320 kernel: audit: type=1325 audit(1707436462.888:459): table=nat:136 family=2 entries=198 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:23.025908 env[1139]: time="2024-02-08T23:54:23.025742220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b7dbcd78-9rp72,Uid:762de1bc-9e3a-4078-b97a-2e2dbc3cca87,Namespace:calico-apiserver,Attempt:0,}" Feb 8 23:54:23.170045 systemd-networkd[1029]: cali5fb3e187460: Link UP Feb 8 23:54:23.171249 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:54:23.171450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5fb3e187460: link becomes ready Feb 8 23:54:23.171572 systemd-networkd[1029]: cali5fb3e187460: Gained carrier Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.085 [INFO][4911] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0 calico-apiserver-65b7dbcd78- calico-apiserver 762de1bc-9e3a-4078-b97a-2e2dbc3cca87 1175 0 2024-02-08 23:54:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65b7dbcd78 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-2-a-bd3a159777.novalocal calico-apiserver-65b7dbcd78-9rp72 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5fb3e187460 [] []}} ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Namespace="calico-apiserver" Pod="calico-apiserver-65b7dbcd78-9rp72" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.085 [INFO][4911] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Namespace="calico-apiserver" Pod="calico-apiserver-65b7dbcd78-9rp72" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.113 [INFO][4923] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" HandleID="k8s-pod-network.10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.127 [INFO][4923] ipam_plugin.go 268: Auto assigning IP ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" HandleID="k8s-pod-network.10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fba10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-2-a-bd3a159777.novalocal", "pod":"calico-apiserver-65b7dbcd78-9rp72", "timestamp":"2024-02-08 23:54:23.113243611 +0000 UTC"}, Hostname:"ci-3510-3-2-a-bd3a159777.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.127 [INFO][4923] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.127 [INFO][4923] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.127 [INFO][4923] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-a-bd3a159777.novalocal' Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.130 [INFO][4923] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.138 [INFO][4923] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.143 [INFO][4923] ipam.go 489: Trying affinity for 192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.145 [INFO][4923] ipam.go 155: Attempting to load block cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.148 [INFO][4923] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.64/26 host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.148 [INFO][4923] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.64/26 handle="k8s-pod-network.10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.149 [INFO][4923] ipam.go 1682: Creating new handle: k8s-pod-network.10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5 Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.154 [INFO][4923] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.64/26 handle="k8s-pod-network.10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.163 [INFO][4923] ipam.go 1216: Successfully claimed IPs: [192.168.52.69/26] block=192.168.52.64/26 handle="k8s-pod-network.10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.163 [INFO][4923] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.69/26] handle="k8s-pod-network.10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" host="ci-3510-3-2-a-bd3a159777.novalocal" Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.163 [INFO][4923] ipam_plugin.go 377: Released host-wide IPAM lock. 
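The [INFO][4923] ipam.go lines above trace Calico's block-affinity IPAM path for this pod: under the host-wide lock (ipam_plugin.go 356/371/377), look up the host's existing block affinities, try the affine block 192.168.52.64/26, load and confirm it, assign one free address from it under a newly created handle, and write the block back to claim 192.168.52.69. What follows is a rough Python sketch of that decision order as logged, not Calico's implementation; the data structures, the helper name, and the assumption that .65-.68 were already in use are illustrative, and the locking step is omitted:

    import ipaddress

    def assign_from_affine_block(host: str, affinities: dict, blocks: dict, handle: str) -> str:
        """Paraphrase of the logged steps. `affinities` maps host -> affine CIDRs,
        `blocks` maps CIDR -> set of addresses already assigned (illustrative structures)."""
        # ipam.go 660/372: look up existing affinities for this host
        for cidr in affinities.get(host, []):
            # ipam.go 489/155/232: try the affinity, load the block, confirm it
            used = blocks[cidr]
            # ipam.go 1180: attempt to assign one address from the block
            for ip in ipaddress.ip_network(cidr).hosts():
                if str(ip) not in used:
                    # ipam.go 1682/1203: create a new handle and write the block to claim the IP
                    used.add(str(ip))
                    return f"{ip} claimed under handle {handle}"
        raise RuntimeError("no free address in any affine block")

    affinities = {"ci-3510-3-2-a-bd3a159777.novalocal": ["192.168.52.64/26"]}
    blocks = {"192.168.52.64/26": {"192.168.52.65", "192.168.52.66", "192.168.52.67", "192.168.52.68"}}
    print(assign_from_affine_block("ci-3510-3-2-a-bd3a159777.novalocal", affinities, blocks,
                                   "k8s-pod-network.<sandbox-id>"))
    # Prints "192.168.52.69 claimed under handle k8s-pod-network.<sandbox-id>",
    # matching the address claimed at ipam.go 1216 above.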
Feb 8 23:54:23.192319 env[1139]: 2024-02-08 23:54:23.163 [INFO][4923] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.52.69/26] IPv6=[] ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" HandleID="k8s-pod-network.10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Workload="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" Feb 8 23:54:23.193012 env[1139]: 2024-02-08 23:54:23.165 [INFO][4911] k8s.go 385: Populated endpoint ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Namespace="calico-apiserver" Pod="calico-apiserver-65b7dbcd78-9rp72" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0", GenerateName:"calico-apiserver-65b7dbcd78-", Namespace:"calico-apiserver", SelfLink:"", UID:"762de1bc-9e3a-4078-b97a-2e2dbc3cca87", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 54, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b7dbcd78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"", Pod:"calico-apiserver-65b7dbcd78-9rp72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fb3e187460", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:54:23.193012 env[1139]: 2024-02-08 23:54:23.165 [INFO][4911] k8s.go 386: Calico CNI using IPs: [192.168.52.69/32] ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Namespace="calico-apiserver" Pod="calico-apiserver-65b7dbcd78-9rp72" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" Feb 8 23:54:23.193012 env[1139]: 2024-02-08 23:54:23.165 [INFO][4911] dataplane_linux.go 68: Setting the host side veth name to cali5fb3e187460 ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Namespace="calico-apiserver" Pod="calico-apiserver-65b7dbcd78-9rp72" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" Feb 8 23:54:23.193012 env[1139]: 2024-02-08 23:54:23.172 [INFO][4911] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Namespace="calico-apiserver" Pod="calico-apiserver-65b7dbcd78-9rp72" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" Feb 8 23:54:23.193012 env[1139]: 2024-02-08 23:54:23.172 [INFO][4911] k8s.go 413: Added Mac, interface name, and 
active container ID to endpoint ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Namespace="calico-apiserver" Pod="calico-apiserver-65b7dbcd78-9rp72" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0", GenerateName:"calico-apiserver-65b7dbcd78-", Namespace:"calico-apiserver", SelfLink:"", UID:"762de1bc-9e3a-4078-b97a-2e2dbc3cca87", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 54, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b7dbcd78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-a-bd3a159777.novalocal", ContainerID:"10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5", Pod:"calico-apiserver-65b7dbcd78-9rp72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fb3e187460", MAC:"16:c8:c5:b4:77:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:54:23.193012 env[1139]: 2024-02-08 23:54:23.185 [INFO][4911] k8s.go 491: Wrote updated endpoint to datastore ContainerID="10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5" Namespace="calico-apiserver" Pod="calico-apiserver-65b7dbcd78-9rp72" WorkloadEndpoint="ci--3510--3--2--a--bd3a159777.novalocal-k8s-calico--apiserver--65b7dbcd78--9rp72-eth0" Feb 8 23:54:23.220249 env[1139]: time="2024-02-08T23:54:23.220183486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:54:23.220421 env[1139]: time="2024-02-08T23:54:23.220226878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:54:23.220421 env[1139]: time="2024-02-08T23:54:23.220240284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:54:23.221340 env[1139]: time="2024-02-08T23:54:23.220566804Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5 pid=4953 runtime=io.containerd.runc.v2 Feb 8 23:54:23.231000 audit[4956]: NETFILTER_CFG table=filter:137 family=2 entries=55 op=nft_register_chain pid=4956 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:54:23.231000 audit[4956]: SYSCALL arch=c000003e syscall=46 success=yes exit=28088 a0=3 a1=7ffdf517c690 a2=0 a3=7ffdf517c67c items=0 ppid=3324 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:23.231000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:54:23.317528 env[1139]: time="2024-02-08T23:54:23.317469283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b7dbcd78-9rp72,Uid:762de1bc-9e3a-4078-b97a-2e2dbc3cca87,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5\"" Feb 8 23:54:23.319756 env[1139]: time="2024-02-08T23:54:23.319725427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 8 23:54:24.402661 systemd-networkd[1029]: cali5fb3e187460: Gained IPv6LL Feb 8 23:54:24.501960 systemd[1]: Started sshd@21-172.24.4.64:22-172.24.4.1:39406.service. Feb 8 23:54:24.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.64:22-172.24.4.1:39406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:26.010000 audit[4990]: USER_ACCT pid=4990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:26.013586 sshd[4990]: Accepted publickey for core from 172.24.4.1 port 39406 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:26.018000 audit[4990]: CRED_ACQ pid=4990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:26.019000 audit[4990]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe756933b0 a2=3 a3=0 items=0 ppid=1 pid=4990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:26.019000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:26.023878 sshd[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:26.039890 systemd[1]: Started session-22.scope. Feb 8 23:54:26.042408 systemd-logind[1126]: New session 22 of user core. 
Feb 8 23:54:26.062000 audit[4990]: USER_START pid=4990 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:26.064000 audit[4994]: CRED_ACQ pid=4994 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:27.081743 sshd[4990]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:27.083000 audit[4990]: USER_END pid=4990 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:27.084000 audit[4990]: CRED_DISP pid=4990 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:27.086934 systemd[1]: sshd@21-172.24.4.64:22-172.24.4.1:39406.service: Deactivated successfully. Feb 8 23:54:27.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.64:22-172.24.4.1:39406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:27.087905 systemd[1]: session-22.scope: Deactivated successfully. Feb 8 23:54:27.089109 systemd-logind[1126]: Session 22 logged out. Waiting for processes to exit. Feb 8 23:54:27.092068 systemd-logind[1126]: Removed session 22. 
Feb 8 23:54:27.847579 env[1139]: time="2024-02-08T23:54:27.847479570Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:54:27.850435 env[1139]: time="2024-02-08T23:54:27.850376290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:54:27.854139 env[1139]: time="2024-02-08T23:54:27.854107766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:54:27.860045 env[1139]: time="2024-02-08T23:54:27.859887349Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:54:27.860853 env[1139]: time="2024-02-08T23:54:27.860824919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 8 23:54:27.868901 env[1139]: time="2024-02-08T23:54:27.868866578Z" level=info msg="CreateContainer within sandbox \"10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 8 23:54:27.896550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985925995.mount: Deactivated successfully. Feb 8 23:54:27.947973 env[1139]: time="2024-02-08T23:54:27.947880172Z" level=info msg="CreateContainer within sandbox \"10a923c33d1929ca1108145b59678cefda4f6e17821bba8d56efebf1ed0afbf5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c2cde612ee43b79392bebf9bc2bd57322d604e779c81b4da163ddfc923b7b0c0\"" Feb 8 23:54:27.951967 env[1139]: time="2024-02-08T23:54:27.951871140Z" level=info msg="StartContainer for \"c2cde612ee43b79392bebf9bc2bd57322d604e779c81b4da163ddfc923b7b0c0\"" Feb 8 23:54:28.308137 env[1139]: time="2024-02-08T23:54:28.308018044Z" level=info msg="StartContainer for \"c2cde612ee43b79392bebf9bc2bd57322d604e779c81b4da163ddfc923b7b0c0\" returns successfully" Feb 8 23:54:28.649428 kernel: kauditd_printk_skb: 16 callbacks suppressed Feb 8 23:54:28.649548 kernel: audit: type=1325 audit(1707436468.644:470): table=filter:138 family=2 entries=8 op=nft_register_rule pid=5073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:28.644000 audit[5073]: NETFILTER_CFG table=filter:138 family=2 entries=8 op=nft_register_rule pid=5073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:28.654766 kernel: audit: type=1300 audit(1707436468.644:470): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc4048efd0 a2=0 a3=7ffc4048efbc items=0 ppid=2271 pid=5073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:28.644000 audit[5073]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc4048efd0 a2=0 a3=7ffc4048efbc items=0 ppid=2271 pid=5073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:28.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:28.664683 kernel: audit: type=1327 audit(1707436468.644:470): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:28.664746 kernel: audit: type=1325 audit(1707436468.657:471): table=nat:139 family=2 entries=198 op=nft_register_rule pid=5073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:28.657000 audit[5073]: NETFILTER_CFG table=nat:139 family=2 entries=198 op=nft_register_rule pid=5073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:28.670338 kernel: audit: type=1300 audit(1707436468.657:471): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffc4048efd0 a2=0 a3=7ffc4048efbc items=0 ppid=2271 pid=5073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:28.657000 audit[5073]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffc4048efd0 a2=0 a3=7ffc4048efbc items=0 ppid=2271 pid=5073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:28.673165 kernel: audit: type=1327 audit(1707436468.657:471): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:28.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:29.563000 audit[5109]: NETFILTER_CFG table=filter:140 family=2 entries=8 op=nft_register_rule pid=5109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:29.563000 audit[5109]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff9e1616f0 a2=0 a3=7fff9e1616dc items=0 ppid=2271 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:29.572871 kernel: audit: type=1325 audit(1707436469.563:472): table=filter:140 family=2 entries=8 op=nft_register_rule pid=5109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:29.572963 kernel: audit: type=1300 audit(1707436469.563:472): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff9e1616f0 a2=0 a3=7fff9e1616dc items=0 ppid=2271 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:29.572991 kernel: audit: type=1327 audit(1707436469.563:472): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:29.563000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:29.566000 audit[5109]: NETFILTER_CFG table=nat:141 family=2 entries=198 op=nft_register_rule pid=5109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:29.578772 kernel: 
audit: type=1325 audit(1707436469.566:473): table=nat:141 family=2 entries=198 op=nft_register_rule pid=5109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:54:29.566000 audit[5109]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fff9e1616f0 a2=0 a3=7fff9e1616dc items=0 ppid=2271 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:29.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:54:32.083976 systemd[1]: Started sshd@22-172.24.4.64:22-172.24.4.1:45566.service. Feb 8 23:54:32.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.64:22-172.24.4.1:45566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:33.494000 audit[5110]: USER_ACCT pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:33.496659 sshd[5110]: Accepted publickey for core from 172.24.4.1 port 45566 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:33.498000 audit[5110]: CRED_ACQ pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:33.498000 audit[5110]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7c1242f0 a2=3 a3=0 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:33.498000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:33.503265 sshd[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:33.514489 systemd-logind[1126]: New session 23 of user core. Feb 8 23:54:33.515737 systemd[1]: Started session-23.scope. 
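The PROCTITLE values recorded above for the iptables-restor entries (pids 5073 and 5109) are the process command line hex-encoded with NUL bytes separating the arguments. Decoding them recovers the iptables-restore invocation behind the NETFILTER_CFG events; the proctitle 737368643A20636F7265205B707269765D on the sshd records decodes the same way to "sshd: core [priv]".

# Decode an audit PROCTITLE value (hex-encoded, NUL-separated argv) such as
# the one logged above for pids 5073/5109.
def decode_proctitle(hex_value):
    return bytes.fromhex(hex_value).decode("utf-8", errors="replace").split("\x00")

argv = decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
print(argv)
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']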
Feb 8 23:54:33.531000 audit[5110]: USER_START pid=5110 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:33.534000 audit[5115]: CRED_ACQ pid=5115 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:34.422797 sshd[5110]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:34.424000 audit[5110]: USER_END pid=5110 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:34.429222 kernel: kauditd_printk_skb: 10 callbacks suppressed Feb 8 23:54:34.429364 kernel: audit: type=1106 audit(1707436474.424:480): pid=5110 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:34.441033 systemd[1]: sshd@22-172.24.4.64:22-172.24.4.1:45566.service: Deactivated successfully. Feb 8 23:54:34.442757 systemd[1]: session-23.scope: Deactivated successfully. Feb 8 23:54:34.424000 audit[5110]: CRED_DISP pid=5110 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:34.444383 systemd-logind[1126]: Session 23 logged out. Waiting for processes to exit. Feb 8 23:54:34.454469 kernel: audit: type=1104 audit(1707436474.424:481): pid=5110 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:34.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.64:22-172.24.4.1:45566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:34.456040 systemd-logind[1126]: Removed session 23. Feb 8 23:54:34.465385 kernel: audit: type=1131 audit(1707436474.440:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.64:22-172.24.4.1:45566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:36.441480 systemd[1]: run-containerd-runc-k8s.io-f706f26dbf540a674496c77c24f51df5b923036da2f6384b022f61e150525f5a-runc.Mxt0Kx.mount: Deactivated successfully. Feb 8 23:54:39.427232 systemd[1]: Started sshd@23-172.24.4.64:22-172.24.4.1:57168.service. Feb 8 23:54:39.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.64:22-172.24.4.1:57168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:54:39.438918 kernel: audit: type=1130 audit(1707436479.426:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.64:22-172.24.4.1:57168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:40.725394 sshd[5149]: Accepted publickey for core from 172.24.4.1 port 57168 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:40.723000 audit[5149]: USER_ACCT pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:40.737349 kernel: audit: type=1101 audit(1707436480.723:484): pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:40.748373 kernel: audit: type=1103 audit(1707436480.736:485): pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:40.736000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:40.738226 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:40.755349 kernel: audit: type=1006 audit(1707436480.736:486): pid=5149 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Feb 8 23:54:40.769430 kernel: audit: type=1300 audit(1707436480.736:486): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7f6216c0 a2=3 a3=0 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:40.736000 audit[5149]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7f6216c0 a2=3 a3=0 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:40.765370 systemd[1]: Started session-24.scope. Feb 8 23:54:40.766697 systemd-logind[1126]: New session 24 of user core. 
Feb 8 23:54:40.736000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:40.775365 kernel: audit: type=1327 audit(1707436480.736:486): proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:40.782000 audit[5149]: USER_START pid=5149 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:40.796402 kernel: audit: type=1105 audit(1707436480.782:487): pid=5149 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:40.797000 audit[5152]: CRED_ACQ pid=5152 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:40.808382 kernel: audit: type=1103 audit(1707436480.797:488): pid=5152 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:41.617894 sshd[5149]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:41.619000 audit[5149]: USER_END pid=5149 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:41.633354 kernel: audit: type=1106 audit(1707436481.619:489): pid=5149 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:41.633408 systemd[1]: sshd@23-172.24.4.64:22-172.24.4.1:57168.service: Deactivated successfully. Feb 8 23:54:41.619000 audit[5149]: CRED_DISP pid=5149 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:41.636454 systemd[1]: session-24.scope: Deactivated successfully. Feb 8 23:54:41.638495 systemd-logind[1126]: Session 24 logged out. Waiting for processes to exit. Feb 8 23:54:41.641043 systemd-logind[1126]: Removed session 24. Feb 8 23:54:41.646365 kernel: audit: type=1104 audit(1707436481.619:490): pid=5149 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:41.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.64:22-172.24.4.1:57168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:46.623944 systemd[1]: Started sshd@24-172.24.4.64:22-172.24.4.1:53242.service. 
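Each SSH connection above appears to run as its own per-connection systemd unit (sshd@21 through sshd@24), whose instance name embeds the listening and peer endpoints. Below is a small sketch that pulls those endpoints back out; the "sshd@<instance>-<local>:<port>-<peer>:<port>.service" layout is inferred from the unit names in these entries, not taken from systemd documentation.

import re

# Minimal sketch: extract the connection endpoints from a per-connection sshd
# unit name as logged above; the field layout is inferred from these log lines.
UNIT_RE = re.compile(
    r"sshd@(\d+)-(?P<local>[\d.]+):(?P<lport>\d+)-(?P<peer>[\d.]+):(?P<pport>\d+)\.service")

m = UNIT_RE.search("sshd@23-172.24.4.64:22-172.24.4.1:57168.service")
if m:
    print(f"{m['peer']}:{m['pport']} -> {m['local']}:{m['lport']}")
    # 172.24.4.1:57168 -> 172.24.4.64:22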
Feb 8 23:54:46.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.64:22-172.24.4.1:53242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:46.626700 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:54:46.626753 kernel: audit: type=1130 audit(1707436486.622:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.64:22-172.24.4.1:53242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:47.991000 audit[5185]: USER_ACCT pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:47.993171 sshd[5185]: Accepted publickey for core from 172.24.4.1 port 53242 ssh2: RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw Feb 8 23:54:47.993000 audit[5185]: CRED_ACQ pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:47.998480 sshd[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:48.001912 kernel: audit: type=1101 audit(1707436487.991:493): pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:48.001984 kernel: audit: type=1103 audit(1707436487.993:494): pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:47.993000 audit[5185]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedfcc4f40 a2=3 a3=0 items=0 ppid=1 pid=5185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:48.011864 kernel: audit: type=1006 audit(1707436487.993:495): pid=5185 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Feb 8 23:54:48.011973 kernel: audit: type=1300 audit(1707436487.993:495): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedfcc4f40 a2=3 a3=0 items=0 ppid=1 pid=5185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:47.993000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:48.016821 kernel: audit: type=1327 audit(1707436487.993:495): proctitle=737368643A20636F7265205B707269765D Feb 8 23:54:48.016055 systemd-logind[1126]: New session 25 of user core. Feb 8 23:54:48.016421 systemd[1]: Started session-25.scope. 
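sshd identifies the accepted key above only by its fingerprint ("RSA SHA256:HSrdtHi11BFyFOe7/hV/qbBfBUVhiuv35z5JBPEU2gw"), which is the unpadded base64 of the SHA-256 digest of the raw public-key blob. The sketch below reproduces that format from an authorized_keys entry; the key material itself is not in this log, so key_b64 is a hypothetical placeholder.

import base64, hashlib

# Minimal sketch: compute an OpenSSH-style "SHA256:..." fingerprint like the
# one logged for the accepted key above. key_b64 is a hypothetical placeholder
# for the base64 blob of an authorized_keys entry; the real key for user
# "core" does not appear in this log.
def openssh_sha256_fingerprint(key_b64):
    digest = hashlib.sha256(base64.b64decode(key_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode("ascii").rstrip("=")

# Usage (hypothetical key blob):
# openssh_sha256_fingerprint("AAAAB3NzaC1yc2E...")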
Feb 8 23:54:48.022000 audit[5185]: USER_START pid=5185 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:48.029367 kernel: audit: type=1105 audit(1707436488.022:496): pid=5185 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:48.029472 kernel: audit: type=1103 audit(1707436488.024:497): pid=5208 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:48.024000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:48.761421 sshd[5185]: pam_unix(sshd:session): session closed for user core Feb 8 23:54:48.761000 audit[5185]: USER_END pid=5185 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:48.764182 systemd-logind[1126]: Session 25 logged out. Waiting for processes to exit. Feb 8 23:54:48.765557 systemd[1]: sshd@24-172.24.4.64:22-172.24.4.1:53242.service: Deactivated successfully. Feb 8 23:54:48.766312 systemd[1]: session-25.scope: Deactivated successfully. Feb 8 23:54:48.767817 systemd-logind[1126]: Removed session 25. Feb 8 23:54:48.772339 kernel: audit: type=1106 audit(1707436488.761:498): pid=5185 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:48.761000 audit[5185]: CRED_DISP pid=5185 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:48.780342 kernel: audit: type=1104 audit(1707436488.761:499): pid=5185 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 8 23:54:48.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.64:22-172.24.4.1:53242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:53.065117 systemd[1]: run-containerd-runc-k8s.io-c2cde612ee43b79392bebf9bc2bd57322d604e779c81b4da163ddfc923b7b0c0-runc.kEBv2W.mount: Deactivated successfully.
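Sessions 22 through 25 above all follow the same shape: accept and PAM open (USER_ACCT, CRED_ACQ, USER_START), then close roughly a second later (USER_END, CRED_DISP, and SERVICE_STOP for the per-connection unit). The sketch below pairs those open/close events by their ses= id to measure session length from a saved copy of this journal text; the file name is a hypothetical placeholder, and the year is assumed to be 2024 since the timestamp prefix omits it.

import re
from datetime import datetime

# Minimal sketch: pair the USER_START / USER_END audit entries above by their
# ses= id to compute session durations. "journal.txt" is a hypothetical path
# to a saved copy of this log; the "Feb 8 23:54:26.062000" prefix has no year,
# so 2024 is assumed.
LINE_RE = re.compile(
    r"(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+)\s+audit\[\d+\]: (USER_START|USER_END) .*?ses=(\d+)")

def session_durations(text, year=2024):
    starts, durations = {}, {}
    for stamp, kind, ses in LINE_RE.findall(text):
        t = datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")
        if kind == "USER_START":
            starts[ses] = t
        elif ses in starts:
            durations[ses] = (t - starts.pop(ses)).total_seconds()
    return durations

# Usage (hypothetical file): print(session_durations(open("journal.txt").read()))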