Feb 8 23:34:59.011644 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:34:59.011677 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:34:59.011692 kernel: BIOS-provided physical RAM map:
Feb 8 23:34:59.011702 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 8 23:34:59.011712 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 8 23:34:59.011723 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 8 23:34:59.011738 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 8 23:34:59.011749 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 8 23:34:59.011760 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 8 23:34:59.011771 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 8 23:34:59.011781 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 8 23:34:59.011792 kernel: printk: bootconsole [earlyser0] enabled
Feb 8 23:34:59.011803 kernel: NX (Execute Disable) protection: active
Feb 8 23:34:59.011814 kernel: efi: EFI v2.70 by Microsoft
Feb 8 23:34:59.011830 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 8 23:34:59.011843 kernel: random: crng init done
Feb 8 23:34:59.011854 kernel: SMBIOS 3.1.0 present.
Feb 8 23:34:59.011865 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 8 23:34:59.011877 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 8 23:34:59.011888 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 8 23:34:59.011900 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 8 23:34:59.011911 kernel: Hyper-V: Nested features: 0x1e0101
Feb 8 23:34:59.011925 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 8 23:34:59.011936 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 8 23:34:59.011948 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 8 23:34:59.011960 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 8 23:34:59.011972 kernel: tsc: Detected 2593.907 MHz processor
Feb 8 23:34:59.011984 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:34:59.011996 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:34:59.012008 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 8 23:34:59.012019 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:34:59.012031 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 8 23:34:59.012045 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 8 23:34:59.012057 kernel: Using GB pages for direct mapping
Feb 8 23:34:59.012068 kernel: Secure boot disabled
Feb 8 23:34:59.012080 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:34:59.012092 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 8 23:34:59.012104 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:34:59.012116 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:34:59.012128 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 8 23:34:59.012148 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 8 23:34:59.012161 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:34:59.012174 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:34:59.012187 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:34:59.012200 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:34:59.012213 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:34:59.012228 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:34:59.012241 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:34:59.012254 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 8 23:34:59.012267 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 8 23:34:59.012280 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 8 23:34:59.012293 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 8 23:34:59.012306 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 8 23:34:59.012319 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 8 23:34:59.012334 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 8 23:34:59.012347 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 8 23:34:59.012360 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 8 23:34:59.012383 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 8 23:34:59.012396 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 8 23:34:59.012409 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 8 23:34:59.012422 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 8 23:34:59.012434 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 8 23:34:59.012447 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 8 23:34:59.012463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 8 23:34:59.012476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 8 23:34:59.012489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 8 23:34:59.012501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 8 23:34:59.012514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 8 23:34:59.012527 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 8 23:34:59.012540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 8 23:34:59.012553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 8 23:34:59.012566 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 8 23:34:59.012581 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 8 23:34:59.012594 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 8 23:34:59.012607 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 8 23:34:59.012620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 8 23:34:59.012633 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 8 23:34:59.012646 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 8 23:34:59.012659 kernel: Zone ranges:
Feb 8 23:34:59.012672 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:34:59.012685 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 8 23:34:59.012700 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:34:59.012713 kernel: Movable zone start for each node
Feb 8 23:34:59.012725 kernel: Early memory node ranges
Feb 8 23:34:59.012738 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 8 23:34:59.012750 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 8 23:34:59.012763 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 8 23:34:59.012776 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:34:59.012789 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 8 23:34:59.012802 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:34:59.012817 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 8 23:34:59.012830 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 8 23:34:59.012843 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 8 23:34:59.012856 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 8 23:34:59.012869 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:34:59.012882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:34:59.012895 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:34:59.012908 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 8 23:34:59.012920 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 8 23:34:59.012936 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 8 23:34:59.012949 kernel: Booting paravirtualized kernel on Hyper-V
Feb 8 23:34:59.012962 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:34:59.012975 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 8 23:34:59.012989 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 8 23:34:59.013002 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 8 23:34:59.013015 kernel: pcpu-alloc: [0] 0 1
Feb 8 23:34:59.013027 kernel: Hyper-V: PV spinlocks enabled
Feb 8 23:34:59.013040 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:34:59.013055 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 8 23:34:59.013069 kernel: Policy zone: Normal
Feb 8 23:34:59.013083 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:34:59.013096 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:34:59.013109 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 8 23:34:59.013122 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:34:59.013135 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:34:59.013148 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 8 23:34:59.013164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 8 23:34:59.013177 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:34:59.013200 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:34:59.013216 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:34:59.013231 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:34:59.013244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 8 23:34:59.013258 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:34:59.013272 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:34:59.013290 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:34:59.013304 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 8 23:34:59.013317 kernel: Using NULL legacy PIC
Feb 8 23:34:59.013338 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 8 23:34:59.013351 kernel: Console: colour dummy device 80x25
Feb 8 23:34:59.013365 kernel: printk: console [tty1] enabled
Feb 8 23:34:59.013387 kernel: printk: console [ttyS0] enabled
Feb 8 23:34:59.013401 kernel: printk: bootconsole [earlyser0] disabled
Feb 8 23:34:59.013417 kernel: ACPI: Core revision 20210730
Feb 8 23:34:59.013431 kernel: Failed to register legacy timer interrupt
Feb 8 23:34:59.013444 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:34:59.013458 kernel: Hyper-V: Using IPI hypercalls
Feb 8 23:34:59.013472 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Feb 8 23:34:59.013486 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 8 23:34:59.013500 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 8 23:34:59.013514 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:34:59.013527 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:34:59.013540 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:34:59.013556 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:34:59.013570 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 8 23:34:59.013584 kernel: RETBleed: Vulnerable
Feb 8 23:34:59.013597 kernel: Speculative Store Bypass: Vulnerable
Feb 8 23:34:59.013611 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:34:59.013625 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:34:59.013638 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 8 23:34:59.013651 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:34:59.013665 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:34:59.013679 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:34:59.013695 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 8 23:34:59.013709 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 8 23:34:59.013722 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 8 23:34:59.013740 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:34:59.013753 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 8 23:34:59.013767 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 8 23:34:59.013780 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 8 23:34:59.013794 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 8 23:34:59.013807 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:34:59.013821 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:34:59.013834 kernel: LSM: Security Framework initializing
Feb 8 23:34:59.013848 kernel: SELinux: Initializing.
Feb 8 23:34:59.013864 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:34:59.013877 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:34:59.013891 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 8 23:34:59.013905 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 8 23:34:59.013919 kernel: signal: max sigframe size: 3632
Feb 8 23:34:59.013933 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:34:59.013947 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 8 23:34:59.013961 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:34:59.013974 kernel: x86: Booting SMP configuration:
Feb 8 23:34:59.013988 kernel: .... node #0, CPUs: #1
Feb 8 23:34:59.014004 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 8 23:34:59.014019 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 8 23:34:59.014032 kernel: smp: Brought up 1 node, 2 CPUs
Feb 8 23:34:59.014046 kernel: smpboot: Max logical packages: 1
Feb 8 23:34:59.014060 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 8 23:34:59.014073 kernel: devtmpfs: initialized
Feb 8 23:34:59.014087 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:34:59.014101 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 8 23:34:59.014117 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:34:59.014131 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 8 23:34:59.014145 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:34:59.014159 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:34:59.014172 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:34:59.014186 kernel: audit: type=2000 audit(1707435297.023:1): state=initialized audit_enabled=0 res=1
Feb 8 23:34:59.014200 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:34:59.014214 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:34:59.014227 kernel: cpuidle: using governor menu
Feb 8 23:34:59.014244 kernel: ACPI: bus type PCI registered
Feb 8 23:34:59.014257 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:34:59.014271 kernel: dca service started, version 1.12.1
Feb 8 23:34:59.014285 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:34:59.014299 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:34:59.014312 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:34:59.014326 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:34:59.014340 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:34:59.014354 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:34:59.014378 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:34:59.014398 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:34:59.014409 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:34:59.014422 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:34:59.014434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:34:59.014446 kernel: ACPI: Interpreter enabled
Feb 8 23:34:59.014459 kernel: ACPI: PM: (supports S0 S5)
Feb 8 23:34:59.014472 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:34:59.014485 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:34:59.014502 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 8 23:34:59.014515 kernel: iommu: Default domain type: Translated
Feb 8 23:34:59.014528 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:34:59.014541 kernel: vgaarb: loaded
Feb 8 23:34:59.014554 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:34:59.014568 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 8 23:34:59.014581 kernel: PTP clock support registered
Feb 8 23:34:59.014594 kernel: Registered efivars operations
Feb 8 23:34:59.014607 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:34:59.014621 kernel: PCI: System does not support PCI
Feb 8 23:34:59.014637 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 8 23:34:59.014650 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:34:59.014662 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:34:59.014676 kernel: pnp: PnP ACPI init
Feb 8 23:34:59.014689 kernel: pnp: PnP ACPI: found 3 devices
Feb 8 23:34:59.014701 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:34:59.014715 kernel: NET: Registered PF_INET protocol family
Feb 8 23:34:59.014729 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 8 23:34:59.014745 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 8 23:34:59.014759 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:34:59.014772 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:34:59.014786 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 8 23:34:59.014800 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 8 23:34:59.014812 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:34:59.014825 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:34:59.014839 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:34:59.014852 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:34:59.014867 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:34:59.014879 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 8 23:34:59.014892 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 8 23:34:59.014905 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 8 23:34:59.014919 kernel: Initialise system trusted keyrings
Feb 8 23:34:59.014931 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 8 23:34:59.014944 kernel: Key type asymmetric registered
Feb 8 23:34:59.014957 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:34:59.014970 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:34:59.014987 kernel: io scheduler mq-deadline registered
Feb 8 23:34:59.015001 kernel: io scheduler kyber registered
Feb 8 23:34:59.015016 kernel: io scheduler bfq registered
Feb 8 23:34:59.015029 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:34:59.015043 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:34:59.015055 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:34:59.015068 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 8 23:34:59.015080 kernel: i8042: PNP: No PS/2 controller found.
Feb 8 23:34:59.015236 kernel: rtc_cmos 00:02: registered as rtc0
Feb 8 23:34:59.015349 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:34:58 UTC (1707435298)
Feb 8 23:34:59.015470 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 8 23:34:59.015489 kernel: fail to initialize ptp_kvm
Feb 8 23:34:59.015504 kernel: intel_pstate: CPU model not supported
Feb 8 23:34:59.015518 kernel: efifb: probing for efifb
Feb 8 23:34:59.015532 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 8 23:34:59.015546 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 8 23:34:59.015559 kernel: efifb: scrolling: redraw
Feb 8 23:34:59.015577 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 8 23:34:59.015590 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:34:59.015603 kernel: fb0: EFI VGA frame buffer device
Feb 8 23:34:59.015616 kernel: pstore: Registered efi as persistent store backend
Feb 8 23:34:59.015630 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:34:59.015643 kernel: Segment Routing with IPv6
Feb 8 23:34:59.015655 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:34:59.015667 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:34:59.015680 kernel: Key type dns_resolver registered
Feb 8 23:34:59.015699 kernel: IPI shorthand broadcast: enabled
Feb 8 23:34:59.015716 kernel: sched_clock: Marking stable (686807500, 20399900)->(878919400, -171712000)
Feb 8 23:34:59.015738 kernel: registered taskstats version 1
Feb 8 23:34:59.015757 kernel: Loading compiled-in X.509 certificates
Feb 8 23:34:59.015772 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:34:59.015785 kernel: Key type .fscrypt registered
Feb 8 23:34:59.015798 kernel: Key type fscrypt-provisioning registered
Feb 8 23:34:59.015810 kernel: pstore: Using crash dump compression: deflate
Feb 8 23:34:59.015826 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:34:59.015839 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:34:59.015852 kernel: ima: No architecture policies found
Feb 8 23:34:59.015866 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:34:59.015879 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:34:59.015893 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:34:59.015907 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:34:59.015919 kernel: Run /init as init process
Feb 8 23:34:59.015932 kernel: with arguments:
Feb 8 23:34:59.015945 kernel: /init
Feb 8 23:34:59.015961 kernel: with environment:
Feb 8 23:34:59.015974 kernel: HOME=/
Feb 8 23:34:59.015987 kernel: TERM=linux
Feb 8 23:34:59.015998 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:34:59.016013 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:34:59.016030 systemd[1]: Detected virtualization microsoft.
Feb 8 23:34:59.016054 systemd[1]: Detected architecture x86-64.
Feb 8 23:34:59.016071 systemd[1]: Running in initrd.
Feb 8 23:34:59.016084 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:34:59.016098 systemd[1]: Hostname set to <localhost>.
Feb 8 23:34:59.016113 systemd[1]: Initializing machine ID from random generator.
Feb 8 23:34:59.016128 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:34:59.016143 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:34:59.016157 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:34:59.016171 systemd[1]: Reached target paths.target.
Feb 8 23:34:59.016185 systemd[1]: Reached target slices.target.
Feb 8 23:34:59.016201 systemd[1]: Reached target swap.target.
Feb 8 23:34:59.016215 systemd[1]: Reached target timers.target.
Feb 8 23:34:59.016229 systemd[1]: Listening on iscsid.socket.
Feb 8 23:34:59.016243 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:34:59.016257 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:34:59.016276 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:34:59.016291 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:34:59.016307 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:34:59.016321 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:34:59.016336 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:34:59.016354 systemd[1]: Reached target sockets.target.
Feb 8 23:34:59.016387 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:34:59.016400 systemd[1]: Finished network-cleanup.service.
Feb 8 23:34:59.016413 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:34:59.016426 systemd[1]: Starting systemd-journald.service...
Feb 8 23:34:59.016439 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:34:59.016455 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:34:59.016469 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:34:59.016482 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:34:59.016500 systemd-journald[183]: Journal started
Feb 8 23:34:59.016566 systemd-journald[183]: Runtime Journal (/run/log/journal/2fe982f7a95f44fa818f5fb416d18855) is 8.0M, max 159.0M, 151.0M free.
Feb 8 23:34:59.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.037385 kernel: audit: type=1130 audit(1707435299.025:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.037405 systemd[1]: Started systemd-journald.service.
Feb 8 23:34:59.040693 systemd-modules-load[184]: Inserted module 'overlay'
Feb 8 23:34:59.045741 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:34:59.047840 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:34:59.051036 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:34:59.053584 systemd-resolved[185]: Positive Trust Anchors:
Feb 8 23:34:59.053595 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:34:59.053628 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:34:59.056298 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 8 23:34:59.086638 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:34:59.084735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:34:59.089420 kernel: Bridge firewalling registered
Feb 8 23:34:59.091059 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 8 23:34:59.093511 systemd[1]: Started systemd-resolved.service.
Feb 8 23:34:59.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.107919 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:34:59.140648 kernel: audit: type=1130 audit(1707435299.045:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.140678 kernel: audit: type=1130 audit(1707435299.047:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.140701 kernel: audit: type=1130 audit(1707435299.049:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.140718 kernel: audit: type=1130 audit(1707435299.095:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.147431 kernel: SCSI subsystem initialized
Feb 8 23:34:59.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:34:59.156721 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:34:59.184253 kernel: audit: type=1130 audit(1707435299.158:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.184297 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 8 23:34:59.184317 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:34:59.184341 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:34:59.184621 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:34:59.189040 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 8 23:34:59.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.189720 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:34:59.214322 kernel: audit: type=1130 audit(1707435299.188:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.214356 kernel: audit: type=1130 audit(1707435299.201:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.204435 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:34:59.220624 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:34:59.231238 systemd[1]: Finished systemd-sysctl.service. 
Feb 8 23:34:59.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.244386 kernel: audit: type=1130 audit(1707435299.233:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.244469 dracut-cmdline[203]: dracut-dracut-053 Feb 8 23:34:59.247954 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:34:59.308393 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:34:59.321394 kernel: iscsi: registered transport (tcp) Feb 8 23:34:59.346135 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:34:59.346196 kernel: QLogic iSCSI HBA Driver Feb 8 23:34:59.374957 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:34:59.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.378104 systemd[1]: Starting dracut-pre-udev.service... 
Feb 8 23:34:59.429395 kernel: raid6: avx512x4 gen() 18251 MB/s Feb 8 23:34:59.449384 kernel: raid6: avx512x4 xor() 8335 MB/s Feb 8 23:34:59.468392 kernel: raid6: avx512x2 gen() 18082 MB/s Feb 8 23:34:59.489388 kernel: raid6: avx512x2 xor() 29624 MB/s Feb 8 23:34:59.509381 kernel: raid6: avx512x1 gen() 18247 MB/s Feb 8 23:34:59.529380 kernel: raid6: avx512x1 xor() 27037 MB/s Feb 8 23:34:59.549383 kernel: raid6: avx2x4 gen() 18209 MB/s Feb 8 23:34:59.569380 kernel: raid6: avx2x4 xor() 7998 MB/s Feb 8 23:34:59.589379 kernel: raid6: avx2x2 gen() 18202 MB/s Feb 8 23:34:59.610383 kernel: raid6: avx2x2 xor() 22105 MB/s Feb 8 23:34:59.630379 kernel: raid6: avx2x1 gen() 14094 MB/s Feb 8 23:34:59.650379 kernel: raid6: avx2x1 xor() 19287 MB/s Feb 8 23:34:59.671380 kernel: raid6: sse2x4 gen() 11651 MB/s Feb 8 23:34:59.691379 kernel: raid6: sse2x4 xor() 7098 MB/s Feb 8 23:34:59.711378 kernel: raid6: sse2x2 gen() 12919 MB/s Feb 8 23:34:59.732381 kernel: raid6: sse2x2 xor() 7539 MB/s Feb 8 23:34:59.752379 kernel: raid6: sse2x1 gen() 11563 MB/s Feb 8 23:34:59.775385 kernel: raid6: sse2x1 xor() 5857 MB/s Feb 8 23:34:59.775401 kernel: raid6: using algorithm avx512x4 gen() 18251 MB/s Feb 8 23:34:59.775414 kernel: raid6: .... xor() 8335 MB/s, rmw enabled Feb 8 23:34:59.778649 kernel: raid6: using avx512x2 recovery algorithm Feb 8 23:34:59.798391 kernel: xor: automatically using best checksumming function avx Feb 8 23:34:59.893397 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:34:59.901342 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:34:59.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.903000 audit: BPF prog-id=7 op=LOAD Feb 8 23:34:59.904000 audit: BPF prog-id=8 op=LOAD Feb 8 23:34:59.905537 systemd[1]: Starting systemd-udevd.service... 
Feb 8 23:34:59.920348 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 8 23:34:59.926783 systemd[1]: Started systemd-udevd.service. Feb 8 23:34:59.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.932559 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:34:59.948024 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation Feb 8 23:34:59.975937 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:34:59.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.981623 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:35:00.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:00.015537 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:35:00.060388 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:35:00.091065 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 8 23:35:00.091121 kernel: AES CTR mode by8 optimization enabled Feb 8 23:35:00.096388 kernel: hv_vmbus: Vmbus version:5.2 Feb 8 23:35:00.112449 kernel: hv_vmbus: registering driver hv_netvsc Feb 8 23:35:00.127910 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 8 23:35:00.127961 kernel: hv_vmbus: registering driver hv_storvsc Feb 8 23:35:00.128386 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 8 23:35:00.143689 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 8 23:35:00.143722 kernel: scsi host1: storvsc_host_t Feb 8 23:35:00.143755 kernel: scsi host0: storvsc_host_t Feb 8 23:35:00.156230 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 8 23:35:00.156291 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 8 23:35:00.190394 kernel: hv_vmbus: registering driver hid_hyperv Feb 8 23:35:00.190437 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 8 23:35:00.190449 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 8 23:35:00.190624 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 8 23:35:00.190731 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:35:00.208174 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 8 23:35:00.208411 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 8 23:35:00.208585 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 8 23:35:00.216552 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 8 23:35:00.216719 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 8 23:35:00.216821 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 8 23:35:00.222390 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:35:00.226383 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 8 23:35:00.317270 kernel: hv_netvsc 0022489d-1f12-0022-489d-1f120022489d eth0: VF slot 1 added Feb 8 23:35:00.326390 kernel: hv_vmbus: registering driver hv_pci Feb 8 23:35:00.332387 kernel: hv_pci 6f98f6cc-09aa-4533-8247-7738e5b382b2: PCI VMBus probing: Using version 0x10004 Feb 8 23:35:00.342554 kernel: hv_pci 6f98f6cc-09aa-4533-8247-7738e5b382b2: PCI host bridge to bus 09aa:00 Feb 8 23:35:00.342718 kernel: pci_bus 09aa:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 8 23:35:00.342859 kernel: pci_bus 09aa:00: No busn resource found for root bus, will use [bus 00-ff] Feb 8 23:35:00.351837 kernel: pci 09aa:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 8 23:35:00.359384 kernel: pci 09aa:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:35:00.377605 kernel: pci 09aa:00:02.0: enabling Extended Tags Feb 8 23:35:00.391450 kernel: pci 09aa:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 09aa:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 8 23:35:00.400072 kernel: pci_bus 09aa:00: busn_res: [bus 00-ff] end is updated to 00 Feb 8 23:35:00.400249 kernel: pci 09aa:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:35:00.491396 kernel: mlx5_core 09aa:00:02.0: firmware version: 14.30.1350 Feb 8 23:35:00.622504 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:35:00.649390 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (437) Feb 8 23:35:00.656388 kernel: mlx5_core 09aa:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 8 23:35:00.664510 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 8 23:35:00.804594 kernel: mlx5_core 09aa:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 8 23:35:00.804818 kernel: mlx5_core 09aa:00:02.0: mlx5e_tc_post_act_init:40:(pid 190): firmware level support is missing Feb 8 23:35:00.816031 kernel: hv_netvsc 0022489d-1f12-0022-489d-1f120022489d eth0: VF registering: eth1 Feb 8 23:35:00.816200 kernel: mlx5_core 09aa:00:02.0 eth1: joined to eth0 Feb 8 23:35:00.828394 kernel: mlx5_core 09aa:00:02.0 enP2474s1: renamed from eth1 Feb 8 23:35:00.835048 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:35:00.840605 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:35:00.849244 systemd[1]: Starting disk-uuid.service... Feb 8 23:35:00.885484 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:35:01.869394 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:35:01.870339 disk-uuid[545]: The operation has completed successfully. Feb 8 23:35:01.949345 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:35:01.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.949481 systemd[1]: Finished disk-uuid.service. Feb 8 23:35:01.952727 systemd[1]: Starting verity-setup.service... Feb 8 23:35:01.991389 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 8 23:35:02.249172 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:35:02.254123 systemd[1]: Finished verity-setup.service. 
Feb 8 23:35:02.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:02.259239 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:35:02.331405 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:35:02.331005 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:35:02.334695 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:35:02.338530 systemd[1]: Starting ignition-setup.service... Feb 8 23:35:02.341154 systemd[1]: Starting parse-ip-for-networkd.service... Feb 8 23:35:02.362294 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:35:02.362346 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:35:02.362364 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:35:02.406299 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:35:02.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:02.411000 audit: BPF prog-id=9 op=LOAD Feb 8 23:35:02.412527 systemd[1]: Starting systemd-networkd.service... Feb 8 23:35:02.436878 systemd-networkd[815]: lo: Link UP Feb 8 23:35:02.436889 systemd-networkd[815]: lo: Gained carrier Feb 8 23:35:02.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:02.437403 systemd-networkd[815]: Enumeration completed Feb 8 23:35:02.437470 systemd[1]: Started systemd-networkd.service. Feb 8 23:35:02.441423 systemd[1]: Reached target network.target. 
Feb 8 23:35:02.445059 systemd-networkd[815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:35:02.447830 systemd[1]: Starting iscsiuio.service... Feb 8 23:35:02.462791 systemd[1]: Started iscsiuio.service. Feb 8 23:35:02.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:02.467865 systemd[1]: Starting iscsid.service... Feb 8 23:35:02.471896 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:35:02.475499 iscsid[827]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:35:02.475499 iscsid[827]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 8 23:35:02.475499 iscsid[827]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:35:02.475499 iscsid[827]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:35:02.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:02.498679 iscsid[827]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:35:02.498679 iscsid[827]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:35:02.517395 kernel: mlx5_core 09aa:00:02.0 enP2474s1: Link up Feb 8 23:35:02.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:02.480510 systemd[1]: Started iscsid.service. Feb 8 23:35:02.495177 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:35:02.515058 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:35:02.517479 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:35:02.519337 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:35:02.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:02.521262 systemd[1]: Reached target remote-fs.target. Feb 8 23:35:02.524657 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:35:02.534569 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:35:02.593343 kernel: hv_netvsc 0022489d-1f12-0022-489d-1f120022489d eth0: Data path switched to VF: enP2474s1 Feb 8 23:35:02.593581 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:35:02.594261 systemd-networkd[815]: enP2474s1: Link UP Feb 8 23:35:02.594499 systemd-networkd[815]: eth0: Link UP Feb 8 23:35:02.594936 systemd-networkd[815]: eth0: Gained carrier Feb 8 23:35:02.602539 systemd-networkd[815]: enP2474s1: Gained carrier Feb 8 23:35:02.628447 systemd-networkd[815]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:35:02.679532 systemd[1]: Finished ignition-setup.service.
Feb 8 23:35:02.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:02.682681 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:35:04.645576 systemd-networkd[815]: eth0: Gained IPv6LL Feb 8 23:35:06.064009 ignition[843]: Ignition 2.14.0 Feb 8 23:35:06.064025 ignition[843]: Stage: fetch-offline Feb 8 23:35:06.064112 ignition[843]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:06.064169 ignition[843]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:06.136152 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:06.136397 ignition[843]: parsed url from cmdline: "" Feb 8 23:35:06.137642 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:35:06.136404 ignition[843]: no config URL provided Feb 8 23:35:06.157880 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 8 23:35:06.157916 kernel: audit: type=1130 audit(1707435306.145:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.146531 systemd[1]: Starting ignition-fetch.service... 
Feb 8 23:35:06.136410 ignition[843]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:35:06.136419 ignition[843]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:35:06.136425 ignition[843]: failed to fetch config: resource requires networking Feb 8 23:35:06.136542 ignition[843]: Ignition finished successfully Feb 8 23:35:06.155014 ignition[849]: Ignition 2.14.0 Feb 8 23:35:06.155020 ignition[849]: Stage: fetch Feb 8 23:35:06.155121 ignition[849]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:06.155144 ignition[849]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:06.162562 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:06.162720 ignition[849]: parsed url from cmdline: "" Feb 8 23:35:06.162728 ignition[849]: no config URL provided Feb 8 23:35:06.162733 ignition[849]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:35:06.162741 ignition[849]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:35:06.162776 ignition[849]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 8 23:35:06.278537 ignition[849]: GET result: OK Feb 8 23:35:06.278564 ignition[849]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty) Feb 8 23:35:06.402577 ignition[849]: opening config device: "/dev/sr0" Feb 8 23:35:06.403014 ignition[849]: getting drive status for "/dev/sr0" Feb 8 23:35:06.403060 ignition[849]: drive status: OK Feb 8 23:35:06.403097 ignition[849]: mounting config device Feb 8 23:35:06.403127 ignition[849]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure4263440561" Feb 8 23:35:06.428281 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/09 00:00 (1000) Feb 8 23:35:06.427440 ignition[849]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure4263440561" Feb 8 23:35:06.427447 ignition[849]: checking for config drive Feb 8 23:35:06.427826 ignition[849]: reading config Feb 8 23:35:06.428244 ignition[849]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure4263440561" Feb 8 23:35:06.428335 ignition[849]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure4263440561" Feb 8 23:35:06.428361 ignition[849]: config has been read from custom data Feb 8 23:35:06.428442 ignition[849]: parsing config with SHA512: f9e166f61a13ec66066aa9984142f091b3702db38b226536c69db6c2f788e209a48df94175edc2d3b6e1c80f248f6a8125682efc643d61d192a1b965e0e1f0f8 Feb 8 23:35:06.439844 systemd[1]: tmp-ignition\x2dazure4263440561.mount: Deactivated successfully. Feb 8 23:35:06.466626 unknown[849]: fetched base config from "system" Feb 8 23:35:06.466647 unknown[849]: fetched base config from "system" Feb 8 23:35:06.466662 unknown[849]: fetched user config from "azure" Feb 8 23:35:06.471455 ignition[849]: fetch: fetch complete Feb 8 23:35:06.471460 ignition[849]: fetch: fetch passed Feb 8 23:35:06.471500 ignition[849]: Ignition finished successfully Feb 8 23:35:06.475726 systemd[1]: Finished ignition-fetch.service. Feb 8 23:35:06.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.482334 systemd[1]: Starting ignition-kargs.service... Feb 8 23:35:06.496534 kernel: audit: type=1130 audit(1707435306.481:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.502327 ignition[857]: Ignition 2.14.0 Feb 8 23:35:06.502336 ignition[857]: Stage: kargs Feb 8 23:35:06.502486 ignition[857]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:06.502522 ignition[857]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:06.507395 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:06.511805 ignition[857]: kargs: kargs passed Feb 8 23:35:06.511857 ignition[857]: Ignition finished successfully Feb 8 23:35:06.515482 systemd[1]: Finished ignition-kargs.service. Feb 8 23:35:06.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.528384 kernel: audit: type=1130 audit(1707435306.517:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.529190 systemd[1]: Starting ignition-disks.service... Feb 8 23:35:06.539604 ignition[863]: Ignition 2.14.0 Feb 8 23:35:06.539614 ignition[863]: Stage: disks Feb 8 23:35:06.539742 ignition[863]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:06.539778 ignition[863]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:06.549115 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:06.552746 ignition[863]: disks: disks passed Feb 8 23:35:06.552800 ignition[863]: Ignition finished successfully Feb 8 23:35:06.556506 systemd[1]: Finished ignition-disks.service.
Feb 8 23:35:06.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.558355 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:35:06.575470 kernel: audit: type=1130 audit(1707435306.557:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.571555 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:35:06.575473 systemd[1]: Reached target local-fs.target. Feb 8 23:35:06.577248 systemd[1]: Reached target sysinit.target. Feb 8 23:35:06.580960 systemd[1]: Reached target basic.target. Feb 8 23:35:06.583478 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:35:06.642429 systemd-fsck[871]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 8 23:35:06.645905 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:35:06.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.651941 systemd[1]: Mounting sysroot.mount... Feb 8 23:35:06.668263 kernel: audit: type=1130 audit(1707435306.650:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.676451 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:35:06.676849 systemd[1]: Mounted sysroot.mount. Feb 8 23:35:06.678569 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:35:06.709131 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:35:06.713119 systemd[1]: Starting flatcar-metadata-hostname.service... 
Feb 8 23:35:06.719540 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:35:06.719656 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:35:06.728221 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:35:06.777076 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:35:06.782876 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:35:06.794387 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (881) Feb 8 23:35:06.802893 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:35:06.802925 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:35:06.802940 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:35:06.807354 initrd-setup-root[886]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:35:06.813512 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:35:06.833472 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:35:06.838516 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:35:06.861924 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:35:07.256576 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:35:07.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:07.271290 systemd[1]: Starting ignition-mount.service... Feb 8 23:35:07.274645 kernel: audit: type=1130 audit(1707435307.258:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:07.277555 systemd[1]: Starting sysroot-boot.service... Feb 8 23:35:07.299783 systemd[1]: Finished sysroot-boot.service. 
Feb 8 23:35:07.301673 ignition[947]: INFO : Ignition 2.14.0
Feb 8 23:35:07.301673 ignition[947]: INFO : Stage: mount
Feb 8 23:35:07.301673 ignition[947]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:35:07.301673 ignition[947]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:35:07.313342 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:35:07.313342 ignition[947]: INFO : mount: mount passed
Feb 8 23:35:07.313342 ignition[947]: INFO : Ignition finished successfully
Feb 8 23:35:07.343065 kernel: audit: type=1130 audit(1707435307.313:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:07.343090 kernel: audit: type=1130 audit(1707435307.329:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:07.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:07.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:07.325379 systemd[1]: Finished ignition-mount.service.
Feb 8 23:35:07.428994 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 8 23:35:07.429108 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 8 23:35:08.072065 coreos-metadata[880]: Feb 08 23:35:08.071 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 8 23:35:08.088876 coreos-metadata[880]: Feb 08 23:35:08.088 INFO Fetch successful
Feb 8 23:35:08.121739 coreos-metadata[880]: Feb 08 23:35:08.121 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 8 23:35:08.138520 coreos-metadata[880]: Feb 08 23:35:08.138 INFO Fetch successful
Feb 8 23:35:08.160906 coreos-metadata[880]: Feb 08 23:35:08.160 INFO wrote hostname ci-3510.3.2-a-9933156126 to /sysroot/etc/hostname
Feb 8 23:35:08.166227 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 8 23:35:08.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:08.181381 systemd[1]: Starting ignition-files.service...
Feb 8 23:35:08.185933 kernel: audit: type=1130 audit(1707435308.168:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:08.189103 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 8 23:35:08.203742 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (959)
Feb 8 23:35:08.203776 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:35:08.203789 kernel: BTRFS info (device sda6): using free space tree
Feb 8 23:35:08.210644 kernel: BTRFS info (device sda6): has skinny extents
Feb 8 23:35:08.215409 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 8 23:35:08.228612 ignition[978]: INFO : Ignition 2.14.0
Feb 8 23:35:08.228612 ignition[978]: INFO : Stage: files
Feb 8 23:35:08.232280 ignition[978]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:35:08.232280 ignition[978]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:35:08.244268 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:35:08.259747 ignition[978]: DEBUG : files: compiled without relabeling support, skipping
Feb 8 23:35:08.262885 ignition[978]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 8 23:35:08.262885 ignition[978]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 8 23:35:08.335152 ignition[978]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 8 23:35:08.339019 ignition[978]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 8 23:35:08.339019 ignition[978]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 8 23:35:08.335699 unknown[978]: wrote ssh authorized keys file for user: core
Feb 8 23:35:08.351515 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 8 23:35:08.356056 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 8 23:35:13.992132 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 8 23:35:14.121280 ignition[978]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 8 23:35:14.128799 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 8 23:35:14.128799 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 8 23:35:14.128799 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 8 23:35:14.360211 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 8 23:35:14.471631 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 8 23:35:14.476668 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 8 23:35:14.476668 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 8 23:35:14.476668 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 8 23:35:14.476668 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 8 23:35:14.972890 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 8 23:35:15.114385 ignition[978]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 8 23:35:15.121932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 8 23:35:15.121932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:35:15.121932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 8 23:35:15.516812 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 8 23:35:16.055845 ignition[978]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 8 23:35:16.063819 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:35:16.063819 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:35:16.063819 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 8 23:35:16.186216 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 8 23:35:16.355293 ignition[978]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 8 23:35:16.363722 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:35:16.363722 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 8 23:35:16.363722 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 8 23:35:16.481018 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 8 23:35:16.655057 ignition[978]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:35:16.665354 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:35:16.751198 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (978)
Feb 8 23:35:16.751227 kernel: audit: type=1130 audit(1707435316.707:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem847329850"
Feb 8 23:35:16.751293 ignition[978]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem847329850": device or resource busy
Feb 8 23:35:16.751293 ignition[978]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem847329850", trying btrfs: device or resource busy
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem847329850"
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem847329850"
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem847329850"
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem847329850"
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3013967191"
Feb 8 23:35:16.751293 ignition[978]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3013967191": device or resource busy
Feb 8 23:35:16.751293 ignition[978]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3013967191", trying btrfs: device or resource busy
Feb 8 23:35:16.751293 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3013967191"
Feb 8 23:35:16.679289 systemd[1]: mnt-oem847329850.mount: Deactivated successfully.
Feb 8 23:35:16.815281 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3013967191"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem3013967191"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem3013967191"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(18): [started] processing unit "waagent.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(18): [finished] processing unit "waagent.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(19): [started] processing unit "nvidia.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(19): [finished] processing unit "nvidia.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1a): [started] processing unit "containerd.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1a): op(1b): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1a): op(1b): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1a): [finished] processing unit "containerd.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1e): [started] processing unit "prepare-critools.service"
Feb 8 23:35:16.815281 ignition[978]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:35:16.912372 kernel: audit: type=1130 audit(1707435316.843:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.912402 kernel: audit: type=1130 audit(1707435316.858:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.912417 kernel: audit: type=1131 audit(1707435316.858:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.693978 systemd[1]: mnt-oem3013967191.mount: Deactivated successfully.
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(20): [started] processing unit "prepare-helm.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(20): op(21): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(20): op(21): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(20): [finished] processing unit "prepare-helm.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(22): [started] setting preset to enabled for "waagent.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(22): [finished] setting preset to enabled for "waagent.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(23): [started] setting preset to enabled for "nvidia.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(23): [finished] setting preset to enabled for "nvidia.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(25): [started] setting preset to enabled for "prepare-critools.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-critools.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:35:16.914757 ignition[978]: INFO : files: files passed
Feb 8 23:35:16.914757 ignition[978]: INFO : Ignition finished successfully
Feb 8 23:35:16.701114 systemd[1]: Finished ignition-files.service.
Feb 8 23:35:16.978607 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 8 23:35:16.720581 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 8 23:35:16.723277 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 8 23:35:16.765758 systemd[1]: Starting ignition-quench.service...
Feb 8 23:35:16.840412 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 8 23:35:16.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.843603 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 8 23:35:17.020884 kernel: audit: type=1130 audit(1707435316.996:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.020911 kernel: audit: type=1131 audit(1707435316.996:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:16.843678 systemd[1]: Finished ignition-quench.service.
Feb 8 23:35:16.858551 systemd[1]: Reached target ignition-complete.target.
Feb 8 23:35:16.978974 systemd[1]: Starting initrd-parse-etc.service...
Feb 8 23:35:16.994458 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 8 23:35:16.994542 systemd[1]: Finished initrd-parse-etc.service.
Feb 8 23:35:16.997629 systemd[1]: Reached target initrd-fs.target.
Feb 8 23:35:17.026215 systemd[1]: Reached target initrd.target.
Feb 8 23:35:17.036094 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 8 23:35:17.037041 systemd[1]: Starting dracut-pre-pivot.service...
Feb 8 23:35:17.049708 systemd[1]: Finished dracut-pre-pivot.service.
Feb 8 23:35:17.067394 kernel: audit: type=1130 audit(1707435317.053:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.065093 systemd[1]: Starting initrd-cleanup.service...
Feb 8 23:35:17.075067 systemd[1]: Stopped target nss-lookup.target.
Feb 8 23:35:17.078998 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 8 23:35:17.080925 systemd[1]: Stopped target timers.target.
Feb 8 23:35:17.084448 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 8 23:35:17.084565 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 8 23:35:17.088216 systemd[1]: Stopped target initrd.target.
Feb 8 23:35:17.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.103469 kernel: audit: type=1131 audit(1707435317.087:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.105365 systemd[1]: Stopped target basic.target.
Feb 8 23:35:17.108992 systemd[1]: Stopped target ignition-complete.target.
Feb 8 23:35:17.112909 systemd[1]: Stopped target ignition-diskful.target.
Feb 8 23:35:17.116991 systemd[1]: Stopped target initrd-root-device.target.
Feb 8 23:35:17.121005 systemd[1]: Stopped target remote-fs.target.
Feb 8 23:35:17.123018 systemd[1]: Stopped target remote-fs-pre.target.
Feb 8 23:35:17.126581 systemd[1]: Stopped target sysinit.target.
Feb 8 23:35:17.129995 systemd[1]: Stopped target local-fs.target.
Feb 8 23:35:17.133414 systemd[1]: Stopped target local-fs-pre.target.
Feb 8 23:35:17.138802 systemd[1]: Stopped target swap.target.
Feb 8 23:35:17.142272 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 8 23:35:17.144473 systemd[1]: Stopped dracut-pre-mount.service.
Feb 8 23:35:17.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.148364 systemd[1]: Stopped target cryptsetup.target.
Feb 8 23:35:17.162525 kernel: audit: type=1131 audit(1707435317.147:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.162676 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 8 23:35:17.164838 systemd[1]: Stopped dracut-initqueue.service.
Feb 8 23:35:17.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.168579 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 8 23:35:17.183262 kernel: audit: type=1131 audit(1707435317.168:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.168718 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 8 23:35:17.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.185491 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 8 23:35:17.187624 systemd[1]: Stopped ignition-files.service.
Feb 8 23:35:17.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.191098 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 8 23:35:17.193597 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 8 23:35:17.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.207077 iscsid[827]: iscsid shutting down.
Feb 8 23:35:17.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.198559 systemd[1]: Stopping ignition-mount.service...
Feb 8 23:35:17.210847 ignition[1016]: INFO : Ignition 2.14.0
Feb 8 23:35:17.210847 ignition[1016]: INFO : Stage: umount
Feb 8 23:35:17.210847 ignition[1016]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:35:17.210847 ignition[1016]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:35:17.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.200599 systemd[1]: Stopping iscsid.service...
Feb 8 23:35:17.225574 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:35:17.225574 ignition[1016]: INFO : umount: umount passed
Feb 8 23:35:17.225574 ignition[1016]: INFO : Ignition finished successfully
Feb 8 23:35:17.203092 systemd[1]: Stopping sysroot-boot.service...
Feb 8 23:35:17.204950 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 8 23:35:17.205184 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 8 23:35:17.208956 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 8 23:35:17.210962 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 8 23:35:17.242962 systemd[1]: iscsid.service: Deactivated successfully.
Feb 8 23:35:17.244992 systemd[1]: Stopped iscsid.service.
Feb 8 23:35:17.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.248571 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 8 23:35:17.250699 systemd[1]: Stopped ignition-mount.service.
Feb 8 23:35:17.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.254529 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 8 23:35:17.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.254651 systemd[1]: Stopped ignition-disks.service.
Feb 8 23:35:17.258864 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 8 23:35:17.258914 systemd[1]: Stopped ignition-kargs.service.
Feb 8 23:35:17.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.264312 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 8 23:35:17.266430 systemd[1]: Stopped ignition-fetch.service.
Feb 8 23:35:17.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.269660 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 8 23:35:17.269705 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 8 23:35:17.273217 systemd[1]: Stopped target paths.target.
Feb 8 23:35:17.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.274795 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 8 23:35:17.275412 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 8 23:35:17.279417 systemd[1]: Stopped target slices.target.
Feb 8 23:35:17.281314 systemd[1]: Stopped target sockets.target.
Feb 8 23:35:17.284803 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 8 23:35:17.284848 systemd[1]: Closed iscsid.socket.
Feb 8 23:35:17.286532 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 8 23:35:17.286575 systemd[1]: Stopped ignition-setup.service.
Feb 8 23:35:17.288479 systemd[1]: Stopping iscsiuio.service...
Feb 8 23:35:17.308920 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 8 23:35:17.311486 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 8 23:35:17.313508 systemd[1]: Stopped iscsiuio.service.
Feb 8 23:35:17.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.316909 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 8 23:35:17.318904 systemd[1]: Finished initrd-cleanup.service.
Feb 8 23:35:17.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.322731 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 8 23:35:17.324715 systemd[1]: Stopped sysroot-boot.service.
Feb 8 23:35:17.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.328856 systemd[1]: Stopped target network.target.
Feb 8 23:35:17.332265 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 8 23:35:17.332310 systemd[1]: Closed iscsiuio.socket.
Feb 8 23:35:17.337026 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 8 23:35:17.337079 systemd[1]: Stopped initrd-setup-root.service.
Feb 8 23:35:17.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.342745 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:35:17.346407 systemd-networkd[815]: eth0: DHCPv6 lease lost
Feb 8 23:35:17.346546 systemd[1]: Stopping systemd-resolved.service...
Feb 8 23:35:17.351781 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:35:17.353892 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:35:17.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.358784 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 8 23:35:17.360997 systemd[1]: Stopped systemd-resolved.service.
Feb 8 23:35:17.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.364000 audit: BPF prog-id=9 op=UNLOAD
Feb 8 23:35:17.365165 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 8 23:35:17.365215 systemd[1]: Closed systemd-networkd.socket.
Feb 8 23:35:17.371266 systemd[1]: Stopping network-cleanup.service...
Feb 8 23:35:17.372000 audit: BPF prog-id=6 op=UNLOAD
Feb 8 23:35:17.374867 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 8 23:35:17.374927 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 8 23:35:17.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.380716 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:35:17.380765 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:35:17.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.386559 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 8 23:35:17.386609 systemd[1]: Stopped systemd-modules-load.service.
Feb 8 23:35:17.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.393013 systemd[1]: Stopping systemd-udevd.service...
Feb 8 23:35:17.396613 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 8 23:35:17.398724 systemd[1]: Stopped systemd-udevd.service.
Feb 8 23:35:17.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.403168 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 8 23:35:17.403245 systemd[1]: Closed systemd-udevd-control.socket.
Feb 8 23:35:17.407869 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 8 23:35:17.411786 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 8 23:35:17.415430 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 8 23:35:17.415480 systemd[1]: Stopped dracut-pre-udev.service.
Feb 8 23:35:17.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.420945 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 8 23:35:17.420991 systemd[1]: Stopped dracut-cmdline.service.
Feb 8 23:35:17.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.426445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 8 23:35:17.426490 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 8 23:35:17.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:17.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 8 23:35:17.432843 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:35:17.434869 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 8 23:35:17.434932 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 8 23:35:17.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:17.437363 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:35:17.437429 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:35:17.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:17.443736 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:35:17.445607 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:35:17.451685 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:35:17.455840 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:35:17.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:17.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:17.477382 kernel: hv_netvsc 0022489d-1f12-0022-489d-1f120022489d eth0: Data path switched from VF: enP2474s1 Feb 8 23:35:17.494712 systemd[1]: network-cleanup.service: Deactivated successfully. 
Feb 8 23:35:17.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:17.494797 systemd[1]: Stopped network-cleanup.service. Feb 8 23:35:17.496977 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:35:17.499933 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:35:17.542396 systemd[1]: Switching root. Feb 8 23:35:17.546000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:35:17.546000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:35:17.547000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:35:17.547000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:35:17.547000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:35:17.567247 systemd-journald[183]: Journal stopped Feb 8 23:35:31.146207 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 8 23:35:31.146267 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:35:31.146289 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:35:31.146299 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:35:31.148221 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:35:31.148241 kernel: SELinux: policy capability open_perms=1 Feb 8 23:35:31.148264 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:35:31.148277 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:35:31.148289 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:35:31.148302 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:35:31.148317 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:35:31.148331 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:35:31.148348 systemd[1]: Successfully loaded SELinux policy in 294.933ms. Feb 8 23:35:31.148377 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.657ms. 
Feb 8 23:35:31.148408 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:35:31.148422 systemd[1]: Detected virtualization microsoft. Feb 8 23:35:31.148435 systemd[1]: Detected architecture x86-64. Feb 8 23:35:31.148447 systemd[1]: Detected first boot. Feb 8 23:35:31.148464 systemd[1]: Hostname set to <ci-3510.3.2-a-9933156126>. Feb 8 23:35:31.148478 systemd[1]: Initializing machine ID from random generator. Feb 8 23:35:31.148493 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:35:31.148509 kernel: kauditd_printk_skb: 42 callbacks suppressed Feb 8 23:35:31.148527 kernel: audit: type=1400 audit(1707435322.622:90): avc: denied { associate } for pid=1067 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:35:31.148548 kernel: audit: type=1300 audit(1707435322.622:90): arch=c000003e syscall=188 success=yes exit=0 a0=c0001076b2 a1=c00002cb58 a2=c00002aa40 a3=32 items=0 ppid=1050 pid=1067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:35:31.148570 kernel: audit: type=1327 audit(1707435322.622:90): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:35:31.148587 kernel: audit: type=1400 audit(1707435322.629:91): avc: denied { associate } for pid=1067 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:35:31.148601 kernel: audit: type=1300 audit(1707435322.629:91): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000107789 a2=1ed a3=0 items=2 ppid=1050 pid=1067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:35:31.148612 kernel: audit: type=1307 audit(1707435322.629:91): cwd="/" Feb 8 23:35:31.148623 kernel: audit: type=1302 audit(1707435322.629:91): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:31.148635 kernel: audit: type=1302 audit(1707435322.629:91): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:31.148649 kernel: audit: type=1327 audit(1707435322.629:91): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:35:31.148659 systemd[1]: Populated /etc with preset unit settings. 
Feb 8 23:35:31.148671 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:35:31.148684 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:35:31.148695 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:35:31.148706 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:35:31.148719 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:35:31.148732 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:35:31.148744 systemd[1]: Created slice system-getty.slice. Feb 8 23:35:31.148756 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:35:31.148771 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:35:31.148783 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:35:31.148793 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:35:31.148806 systemd[1]: Created slice user.slice. Feb 8 23:35:31.148820 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:35:31.148829 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:35:31.148844 systemd[1]: Set up automount boot.automount. Feb 8 23:35:31.148856 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:35:31.148866 systemd[1]: Reached target integritysetup.target. Feb 8 23:35:31.148878 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:35:31.148890 systemd[1]: Reached target remote-fs.target. Feb 8 23:35:31.148900 systemd[1]: Reached target slices.target. Feb 8 23:35:31.148912 systemd[1]: Reached target swap.target. Feb 8 23:35:31.148924 systemd[1]: Reached target torcx.target. 
Feb 8 23:35:31.148937 systemd[1]: Reached target veritysetup.target. Feb 8 23:35:31.148948 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:35:31.148961 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:35:31.148972 kernel: audit: type=1400 audit(1707435330.774:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:35:31.148983 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:35:31.148996 kernel: audit: type=1335 audit(1707435330.774:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 8 23:35:31.149006 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:35:31.149017 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:35:31.149031 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:35:31.149040 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:35:31.149050 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:35:31.149061 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:35:31.149076 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:35:31.149087 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:35:31.149096 systemd[1]: Mounting media.mount... Feb 8 23:35:31.149107 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:35:31.149119 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:35:31.149129 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:35:31.149141 systemd[1]: Mounting tmp.mount... Feb 8 23:35:31.149154 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:35:31.149164 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:35:31.149179 systemd[1]: Starting kmod-static-nodes.service... 
Feb 8 23:35:31.149191 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:35:31.149202 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:35:31.149213 systemd[1]: Starting modprobe@drm.service... Feb 8 23:35:31.149225 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:35:31.149236 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:35:31.149248 systemd[1]: Starting modprobe@loop.service... Feb 8 23:35:31.149260 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:35:31.149272 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 8 23:35:31.149286 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 8 23:35:31.149297 systemd[1]: Starting systemd-journald.service... Feb 8 23:35:31.149310 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:35:31.149323 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:35:31.149333 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:35:31.149345 kernel: loop: module loaded Feb 8 23:35:31.149358 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:35:31.149970 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:35:31.149998 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:35:31.150015 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:35:31.150027 systemd[1]: Mounted media.mount. Feb 8 23:35:31.150038 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:35:31.150051 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:35:31.150064 systemd[1]: Mounted tmp.mount. Feb 8 23:35:31.150075 systemd[1]: Finished kmod-static-nodes.service. 
Feb 8 23:35:31.150088 kernel: audit: type=1130 audit(1707435331.057:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.150101 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:35:31.150114 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:35:31.150127 kernel: audit: type=1130 audit(1707435331.083:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.150139 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:35:31.150150 kernel: fuse: init (API version 7.34) Feb 8 23:35:31.150161 kernel: audit: type=1130 audit(1707435331.107:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.150173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:35:31.150185 kernel: audit: type=1131 audit(1707435331.107:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.150198 kernel: audit: type=1305 audit(1707435331.123:98): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:35:31.150210 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:35:31.150222 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:35:31.150241 systemd-journald[1165]: Journal started Feb 8 23:35:31.150296 systemd-journald[1165]: Runtime Journal (/run/log/journal/e173f4a34e2b497c8bffebee8d299168) is 8.0M, max 159.0M, 151.0M free. 
Feb 8 23:35:30.774000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 8 23:35:31.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.123000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:35:31.160402 systemd[1]: Finished modprobe@drm.service. 
Feb 8 23:35:31.160446 kernel: audit: type=1300 audit(1707435331.123:98): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff4292a130 a2=4000 a3=7fff4292a1cc items=0 ppid=1 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:35:31.123000 audit[1165]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff4292a130 a2=4000 a3=7fff4292a1cc items=0 ppid=1 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:35:31.178514 kernel: audit: type=1327 audit(1707435331.123:98): proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:35:31.123000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:35:31.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.191366 kernel: audit: type=1130 audit(1707435331.145:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.191451 systemd[1]: Started systemd-journald.service. Feb 8 23:35:31.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:35:31.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.198478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:35:31.199057 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:35:31.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.201969 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:35:31.202289 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:35:31.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.205402 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:35:31.205723 systemd[1]: Finished modprobe@loop.service. 
Feb 8 23:35:31.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.208512 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:35:31.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.211457 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:35:31.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.214753 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:35:31.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.217567 systemd[1]: Reached target network-pre.target. Feb 8 23:35:31.221775 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:35:31.232440 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:35:31.235185 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:35:31.237384 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:35:31.240706 systemd[1]: Starting systemd-journal-flush.service... 
Feb 8 23:35:31.242528 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:35:31.244006 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:35:31.246179 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:35:31.247741 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:35:31.250929 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:35:31.255823 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:35:31.259815 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:35:31.272009 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:35:31.274725 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:35:31.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.277040 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:35:31.280656 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:35:31.293071 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:35:31.298944 systemd-journald[1165]: Time spent on flushing to /var/log/journal/e173f4a34e2b497c8bffebee8d299168 is 22.119ms for 1151 entries. Feb 8 23:35:31.298944 systemd-journald[1165]: System Journal (/var/log/journal/e173f4a34e2b497c8bffebee8d299168) is 8.0M, max 2.6G, 2.6G free. Feb 8 23:35:31.376422 systemd-journald[1165]: Received client request to flush runtime journal. 
Feb 8 23:35:31.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.324194 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:35:31.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.377552 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:35:31.825160 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:35:31.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.830009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:35:32.120725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:35:32.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:32.621092 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:35:32.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:32.625245 systemd[1]: Starting systemd-udevd.service... Feb 8 23:35:32.645145 systemd-udevd[1230]: Using default interface naming scheme 'v252'. 
Feb 8 23:35:32.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:32.882929 systemd[1]: Started systemd-udevd.service. Feb 8 23:35:32.887794 systemd[1]: Starting systemd-networkd.service... Feb 8 23:35:32.925600 systemd[1]: Found device dev-ttyS0.device. Feb 8 23:35:32.997249 kernel: hv_utils: Registering HyperV Utility Driver Feb 8 23:35:32.997341 kernel: hv_vmbus: registering driver hv_utils Feb 8 23:35:33.010915 kernel: hv_utils: Heartbeat IC version 3.0 Feb 8 23:35:33.010998 kernel: hv_utils: Shutdown IC version 3.2 Feb 8 23:35:33.011025 kernel: hv_utils: TimeSync IC version 4.0 Feb 8 23:35:33.718682 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:35:33.727770 kernel: hv_vmbus: registering driver hyperv_fb Feb 8 23:35:32.989000 audit[1243]: AVC avc: denied { confidentiality } for pid=1243 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:35:33.737693 kernel: hv_vmbus: registering driver hv_balloon Feb 8 23:35:33.755226 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 8 23:35:33.759351 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 8 23:35:33.759413 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 8 23:35:33.757433 systemd[1]: Starting systemd-userdbd.service... 
Feb 8 23:35:33.764480 kernel: Console: switching to colour dummy device 80x25
Feb 8 23:35:33.771199 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:35:32.989000 audit[1243]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d3df9b2740 a1=f884 a2=7f8e43fffbc5 a3=5 items=12 ppid=1230 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:35:32.989000 audit: CWD cwd="/"
Feb 8 23:35:32.989000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=1 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=2 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=3 name=(null) inode=15611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=4 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=5 name=(null) inode=15612 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=6 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=7 name=(null) inode=15613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=8 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=9 name=(null) inode=15614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=10 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PATH item=11 name=(null) inode=15615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:35:32.989000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 8 23:35:33.836693 systemd[1]: Started systemd-userdbd.service.
Feb 8 23:35:33.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:33.951916 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1247)
Feb 8 23:35:33.992691 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 8 23:35:34.046563 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 8 23:35:34.059044 systemd[1]: Finished systemd-udev-settle.service.
Feb 8 23:35:34.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:34.063356 systemd[1]: Starting lvm2-activation-early.service...
Feb 8 23:35:34.209160 systemd-networkd[1245]: lo: Link UP
Feb 8 23:35:34.209171 systemd-networkd[1245]: lo: Gained carrier
Feb 8 23:35:34.209797 systemd-networkd[1245]: Enumeration completed
Feb 8 23:35:34.209945 systemd[1]: Started systemd-networkd.service.
Feb 8 23:35:34.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:34.213857 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 8 23:35:34.241529 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:35:34.294694 kernel: mlx5_core 09aa:00:02.0 enP2474s1: Link up
Feb 8 23:35:34.334697 kernel: hv_netvsc 0022489d-1f12-0022-489d-1f120022489d eth0: Data path switched to VF: enP2474s1
Feb 8 23:35:34.336035 systemd-networkd[1245]: enP2474s1: Link UP
Feb 8 23:35:34.336216 systemd-networkd[1245]: eth0: Link UP
Feb 8 23:35:34.336223 systemd-networkd[1245]: eth0: Gained carrier
Feb 8 23:35:34.340961 systemd-networkd[1245]: enP2474s1: Gained carrier
Feb 8 23:35:34.364746 lvm[1308]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 8 23:35:34.377805 systemd-networkd[1245]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 8 23:35:34.384855 systemd[1]: Finished lvm2-activation-early.service.
Feb 8 23:35:34.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:34.387408 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:35:34.391047 systemd[1]: Starting lvm2-activation.service...
Feb 8 23:35:34.396859 lvm[1311]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 8 23:35:34.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:34.421713 systemd[1]: Finished lvm2-activation.service.
Feb 8 23:35:34.424561 systemd[1]: Reached target local-fs-pre.target.
Feb 8 23:35:34.427074 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 8 23:35:34.427112 systemd[1]: Reached target local-fs.target.
Feb 8 23:35:34.429433 systemd[1]: Reached target machines.target.
Feb 8 23:35:34.432777 systemd[1]: Starting ldconfig.service...
Feb 8 23:35:34.435105 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 8 23:35:34.435206 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:35:34.436398 systemd[1]: Starting systemd-boot-update.service...
Feb 8 23:35:34.439516 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 8 23:35:34.443407 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 8 23:35:34.445942 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 8 23:35:34.446045 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 8 23:35:34.447182 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 8 23:35:34.988627 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1314 (bootctl)
Feb 8 23:35:34.990681 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 8 23:35:34.993387 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 8 23:35:35.753949 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 8 23:35:35.757125 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 8 23:35:35.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:35.760071 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 8 23:35:36.188896 systemd-networkd[1245]: eth0: Gained IPv6LL
Feb 8 23:35:36.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:36.194655 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 8 23:35:36.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:36.953135 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 8 23:35:36.954173 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 8 23:35:36.959602 kernel: kauditd_printk_skb: 44 callbacks suppressed
Feb 8 23:35:36.959707 kernel: audit: type=1130 audit(1707435336.956:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.619382 systemd-fsck[1323]: fsck.fat 4.2 (2021-01-31)
Feb 8 23:35:37.619382 systemd-fsck[1323]: /dev/sda1: 789 files, 115332/258078 clusters
Feb 8 23:35:37.621846 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 8 23:35:37.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.627676 systemd[1]: Mounting boot.mount...
Feb 8 23:35:37.638752 kernel: audit: type=1130 audit(1707435337.624:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.646066 systemd[1]: Mounted boot.mount.
Feb 8 23:35:37.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.662646 systemd[1]: Finished systemd-boot-update.service.
Feb 8 23:35:37.675705 kernel: audit: type=1130 audit(1707435337.664:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.793285 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 8 23:35:37.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.797541 systemd[1]: Starting audit-rules.service...
Feb 8 23:35:37.808791 kernel: audit: type=1130 audit(1707435337.795:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.811133 systemd[1]: Starting clean-ca-certificates.service...
Feb 8 23:35:37.815323 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 8 23:35:37.820018 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:35:37.824475 systemd[1]: Starting systemd-timesyncd.service...
Feb 8 23:35:37.828395 systemd[1]: Starting systemd-update-utmp.service...
Feb 8 23:35:37.831311 systemd[1]: Finished clean-ca-certificates.service.
Feb 8 23:35:37.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.836772 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 8 23:35:37.844090 kernel: audit: type=1130 audit(1707435337.832:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.851000 audit[1343]: SYSTEM_BOOT pid=1343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.860553 systemd[1]: Finished systemd-update-utmp.service.
Feb 8 23:35:37.863688 kernel: audit: type=1127 audit(1707435337.851:134): pid=1343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.876687 kernel: audit: type=1130 audit(1707435337.863:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.950926 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 8 23:35:37.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.968732 kernel: audit: type=1130 audit(1707435337.953:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.975213 systemd[1]: Started systemd-timesyncd.service.
Feb 8 23:35:37.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:37.977759 systemd[1]: Reached target time-set.target.
Feb 8 23:35:37.991192 kernel: audit: type=1130 audit(1707435337.976:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:35:38.088434 systemd-resolved[1341]: Positive Trust Anchors:
Feb 8 23:35:38.088460 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:35:38.088509 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:35:38.127629 systemd-timesyncd[1342]: Contacted time server 193.1.8.106:123 (0.flatcar.pool.ntp.org).
Feb 8 23:35:38.127721 systemd-timesyncd[1342]: Initial clock synchronization to Thu 2024-02-08 23:35:38.129217 UTC.
Feb 8 23:35:38.166000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 8 23:35:38.168275 systemd[1]: Finished audit-rules.service.
Feb 8 23:35:38.173909 augenrules[1360]: No rules
Feb 8 23:35:38.166000 audit[1360]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0378a450 a2=420 a3=0 items=0 ppid=1336 pid=1360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:35:38.166000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 8 23:35:38.177656 kernel: audit: type=1305 audit(1707435338.166:138): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 8 23:35:38.209296 systemd-resolved[1341]: Using system hostname 'ci-3510.3.2-a-9933156126'.
Feb 8 23:35:38.210983 systemd[1]: Started systemd-resolved.service.
Feb 8 23:35:38.213414 systemd[1]: Reached target network.target.
Feb 8 23:35:38.215560 systemd[1]: Reached target network-online.target.
Feb 8 23:35:38.217920 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:35:43.908228 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 8 23:35:43.917925 systemd[1]: Finished ldconfig.service.
Feb 8 23:35:43.922341 systemd[1]: Starting systemd-update-done.service...
Feb 8 23:35:43.943410 systemd[1]: Finished systemd-update-done.service.
Feb 8 23:35:43.946011 systemd[1]: Reached target sysinit.target.
Feb 8 23:35:43.948022 systemd[1]: Started motdgen.path.
Feb 8 23:35:43.949639 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 8 23:35:43.952349 systemd[1]: Started logrotate.timer.
Feb 8 23:35:43.954186 systemd[1]: Started mdadm.timer.
Feb 8 23:35:43.955810 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 8 23:35:43.958777 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 8 23:35:43.958824 systemd[1]: Reached target paths.target.
Feb 8 23:35:43.960597 systemd[1]: Reached target timers.target.
Feb 8 23:35:43.962870 systemd[1]: Listening on dbus.socket.
Feb 8 23:35:43.965635 systemd[1]: Starting docker.socket...
Feb 8 23:35:43.969215 systemd[1]: Listening on sshd.socket.
Feb 8 23:35:43.971419 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:35:43.971884 systemd[1]: Listening on docker.socket.
Feb 8 23:35:43.973841 systemd[1]: Reached target sockets.target.
Feb 8 23:35:43.975824 systemd[1]: Reached target basic.target.
Feb 8 23:35:43.977891 systemd[1]: System is tainted: cgroupsv1
Feb 8 23:35:43.977946 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 8 23:35:43.977975 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 8 23:35:43.978991 systemd[1]: Starting containerd.service...
Feb 8 23:35:43.982406 systemd[1]: Starting dbus.service...
Feb 8 23:35:43.985578 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 8 23:35:43.988657 systemd[1]: Starting extend-filesystems.service...
Feb 8 23:35:43.990939 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 8 23:35:43.992286 systemd[1]: Starting motdgen.service...
Feb 8 23:35:43.995733 systemd[1]: Started nvidia.service.
Feb 8 23:35:43.999366 systemd[1]: Starting prepare-cni-plugins.service...
Feb 8 23:35:44.003439 systemd[1]: Starting prepare-critools.service...
Feb 8 23:35:44.006719 systemd[1]: Starting prepare-helm.service...
Feb 8 23:35:44.010059 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 8 23:35:44.013597 systemd[1]: Starting sshd-keygen.service...
Feb 8 23:35:44.022387 systemd[1]: Starting systemd-logind.service...
Feb 8 23:35:44.025729 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:35:44.025816 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 8 23:35:44.029242 systemd[1]: Starting update-engine.service...
Feb 8 23:35:44.032537 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 8 23:35:44.048035 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 8 23:35:44.048327 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 8 23:35:44.075617 extend-filesystems[1375]: Found sda
Feb 8 23:35:44.075617 extend-filesystems[1375]: Found sda1
Feb 8 23:35:44.075617 extend-filesystems[1375]: Found sda2
Feb 8 23:35:44.075617 extend-filesystems[1375]: Found sda3
Feb 8 23:35:44.075617 extend-filesystems[1375]: Found usr
Feb 8 23:35:44.075617 extend-filesystems[1375]: Found sda4
Feb 8 23:35:44.075617 extend-filesystems[1375]: Found sda6
Feb 8 23:35:44.075617 extend-filesystems[1375]: Found sda7
Feb 8 23:35:44.075617 extend-filesystems[1375]: Found sda9
Feb 8 23:35:44.075617 extend-filesystems[1375]: Checking size of /dev/sda9
Feb 8 23:35:44.099626 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 8 23:35:44.101774 jq[1374]: false
Feb 8 23:35:44.102559 jq[1396]: true
Feb 8 23:35:44.101772 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 8 23:35:44.112365 systemd[1]: motdgen.service: Deactivated successfully.
Feb 8 23:35:44.112644 systemd[1]: Finished motdgen.service.
Feb 8 23:35:44.135613 jq[1412]: true
Feb 8 23:35:44.152657 tar[1398]: ./
Feb 8 23:35:44.152657 tar[1398]: ./macvlan
Feb 8 23:35:44.157151 tar[1401]: linux-amd64/helm
Feb 8 23:35:44.159589 tar[1400]: crictl
Feb 8 23:35:44.161718 env[1405]: time="2024-02-08T23:35:44.161035174Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 8 23:35:44.183679 extend-filesystems[1375]: Old size kept for /dev/sda9
Feb 8 23:35:44.183679 extend-filesystems[1375]: Found sr0
Feb 8 23:35:44.180982 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 8 23:35:44.181293 systemd[1]: Finished extend-filesystems.service.
Feb 8 23:35:44.230752 systemd-logind[1392]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 8 23:35:44.232754 systemd-logind[1392]: New seat seat0.
Feb 8 23:35:44.332156 tar[1398]: ./static
Feb 8 23:35:44.335717 env[1405]: time="2024-02-08T23:35:44.335635826Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 8 23:35:44.340746 dbus-daemon[1373]: [system] SELinux support is enabled
Feb 8 23:35:44.340980 systemd[1]: Started dbus.service.
Feb 8 23:35:44.341288 env[1405]: time="2024-02-08T23:35:44.341264657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:35:44.345901 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 8 23:35:44.345940 systemd[1]: Reached target system-config.target.
Feb 8 23:35:44.348506 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 8 23:35:44.348538 systemd[1]: Reached target user-config.target.
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.352015879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.352054982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.352371606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.352393108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.352412409Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.352425810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.352512117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.352766236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.352998554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:35:44.353955 env[1405]: time="2024-02-08T23:35:44.353019655Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 8 23:35:44.355105 env[1405]: time="2024-02-08T23:35:44.353075660Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 8 23:35:44.355105 env[1405]: time="2024-02-08T23:35:44.353089961Z" level=info msg="metadata content store policy set" policy=shared
Feb 8 23:35:44.357081 systemd[1]: Started systemd-logind.service.
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.369910747Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.369943850Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.369972652Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370022956Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370105462Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370138565Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370156366Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370176067Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370221071Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370242273Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370257974Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370285576Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370408285Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 8 23:35:44.371161 env[1405]: time="2024-02-08T23:35:44.370524194Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 8 23:35:44.371696 env[1405]: time="2024-02-08T23:35:44.371092138Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 8 23:35:44.371696 env[1405]: time="2024-02-08T23:35:44.371124840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.371146942Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.371867397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.371886598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.371904300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.371978205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.371997107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.372014108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.372029909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.372057411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372122 env[1405]: time="2024-02-08T23:35:44.372077213Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 8 23:35:44.372734 env[1405]: time="2024-02-08T23:35:44.372625355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372734 env[1405]: time="2024-02-08T23:35:44.372678659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372734 env[1405]: time="2024-02-08T23:35:44.372697860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.372734 env[1405]: time="2024-02-08T23:35:44.372714862Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 8 23:35:44.373935 env[1405]: time="2024-02-08T23:35:44.373879651Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 8 23:35:44.373935 env[1405]: time="2024-02-08T23:35:44.373905053Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 8 23:35:44.374131 env[1405]: time="2024-02-08T23:35:44.374064665Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 8 23:35:44.374131 env[1405]: time="2024-02-08T23:35:44.374109968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 8 23:35:44.374900 env[1405]: time="2024-02-08T23:35:44.374502198Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.375077142Z" level=info msg="Connect containerd service"
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.375137947Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.375879304Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.376136023Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.376185627Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.376859379Z" level=info msg="Start subscribing containerd event"
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.376957286Z" level=info msg="Start recovering state"
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.377012190Z" level=info msg="Start event monitor"
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.377033492Z" level=info msg="Start snapshots syncer"
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.377042693Z" level=info msg="Start cni network conf syncer for default"
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.377051293Z" level=info msg="Start streaming server"
Feb 8 23:35:44.405771 env[1405]: time="2024-02-08T23:35:44.400551390Z" level=info msg="containerd successfully booted in 0.240392s"
Feb 8 23:35:44.406120 bash[1454]: Updated "/home/core/.ssh/authorized_keys"
Feb 8 23:35:44.376323 systemd[1]: Started containerd.service.
Feb 8 23:35:44.404843 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 8 23:35:44.416861 systemd[1]: nvidia.service: Deactivated successfully.
Feb 8 23:35:44.462040 tar[1398]: ./vlan
Feb 8 23:35:44.626121 tar[1398]: ./portmap
Feb 8 23:35:44.748842 tar[1398]: ./host-local
Feb 8 23:35:44.857550 tar[1398]: ./vrf
Feb 8 23:35:44.901596 update_engine[1395]: I0208 23:35:44.901002 1395 main.cc:92] Flatcar Update Engine starting
Feb 8 23:35:44.949185 systemd[1]: Started update-engine.service.
Feb 8 23:35:44.949782 update_engine[1395]: I0208 23:35:44.949254 1395 update_check_scheduler.cc:74] Next update check in 9m9s
Feb 8 23:35:44.953625 systemd[1]: Started locksmithd.service.
Feb 8 23:35:44.978185 tar[1398]: ./bridge
Feb 8 23:35:44.988548 systemd[1]: Finished prepare-critools.service.
Feb 8 23:35:45.045189 tar[1398]: ./tuning Feb 8 23:35:45.105255 tar[1398]: ./firewall Feb 8 23:35:45.191250 tar[1398]: ./host-device Feb 8 23:35:45.255832 tar[1398]: ./sbr Feb 8 23:35:45.335185 tar[1398]: ./loopback Feb 8 23:35:45.370036 tar[1401]: linux-amd64/LICENSE Feb 8 23:35:45.370536 tar[1401]: linux-amd64/README.md Feb 8 23:35:45.377476 systemd[1]: Finished prepare-helm.service. Feb 8 23:35:45.401146 tar[1398]: ./dhcp Feb 8 23:35:45.500914 tar[1398]: ./ptp Feb 8 23:35:45.544884 tar[1398]: ./ipvlan Feb 8 23:35:45.588425 tar[1398]: ./bandwidth Feb 8 23:35:45.646736 sshd_keygen[1399]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:35:45.666845 systemd[1]: Finished sshd-keygen.service. Feb 8 23:35:45.672294 systemd[1]: Starting issuegen.service... Feb 8 23:35:45.676228 systemd[1]: Started waagent.service. Feb 8 23:35:45.685827 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:35:45.688755 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:35:45.689037 systemd[1]: Finished issuegen.service. Feb 8 23:35:45.693044 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:35:45.700243 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:35:45.704264 systemd[1]: Started getty@tty1.service. Feb 8 23:35:45.708425 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:35:45.710760 systemd[1]: Reached target getty.target. Feb 8 23:35:45.712951 systemd[1]: Reached target multi-user.target. Feb 8 23:35:45.716694 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:35:45.725856 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:35:45.726138 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:35:45.731824 systemd[1]: Startup finished in 922ms (firmware) + 26.427s (loader) + 22.161s (kernel) + 25.014s (userspace) = 1min 14.525s. 
Feb 8 23:35:46.083518 login[1526]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:35:46.083717 login[1525]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:35:46.108111 systemd[1]: Created slice user-500.slice. Feb 8 23:35:46.109424 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:35:46.113022 systemd-logind[1392]: New session 1 of user core. Feb 8 23:35:46.118745 systemd-logind[1392]: New session 2 of user core. Feb 8 23:35:46.123934 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:35:46.126982 systemd[1]: Starting user@500.service... Feb 8 23:35:46.132440 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:35:46.287790 systemd[1535]: Queued start job for default target default.target. Feb 8 23:35:46.288164 systemd[1535]: Reached target paths.target. Feb 8 23:35:46.288192 systemd[1535]: Reached target sockets.target. Feb 8 23:35:46.288216 systemd[1535]: Reached target timers.target. Feb 8 23:35:46.288237 systemd[1535]: Reached target basic.target. Feb 8 23:35:46.288310 systemd[1535]: Reached target default.target. Feb 8 23:35:46.288356 systemd[1535]: Startup finished in 150ms. Feb 8 23:35:46.288439 systemd[1]: Started user@500.service. Feb 8 23:35:46.290087 systemd[1]: Started session-1.scope. Feb 8 23:35:46.291074 systemd[1]: Started session-2.scope. 
Feb 8 23:35:46.635867 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:35:51.144993 waagent[1517]: 2024-02-08T23:35:51.144884Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 8 23:35:51.149128 waagent[1517]: 2024-02-08T23:35:51.149053Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 8 23:35:51.151961 waagent[1517]: 2024-02-08T23:35:51.151902Z INFO Daemon Daemon Python: 3.9.16 Feb 8 23:35:51.154680 waagent[1517]: 2024-02-08T23:35:51.154600Z INFO Daemon Daemon Run daemon Feb 8 23:35:51.157022 waagent[1517]: 2024-02-08T23:35:51.156965Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 8 23:35:51.169028 waagent[1517]: 2024-02-08T23:35:51.168915Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:35:51.175371 waagent[1517]: 2024-02-08T23:35:51.175274Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.175640Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.176379Z INFO Daemon Daemon Using waagent for provisioning Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.177640Z INFO Daemon Daemon Activate resource disk Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.178297Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.186017Z INFO Daemon Daemon Found device: None Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.186608Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.187358Z ERROR Daemon Daemon Event: 
name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.188934Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.189722Z INFO Daemon Daemon Running default provisioning handler Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.198587Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.200987Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.201701Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:35:51.213772 waagent[1517]: 2024-02-08T23:35:51.202486Z INFO Daemon Daemon Copying ovf-env.xml Feb 8 23:35:51.223575 waagent[1517]: 2024-02-08T23:35:51.219935Z INFO Daemon Daemon Successfully mounted dvd Feb 8 23:35:51.322522 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 8 23:35:51.327354 waagent[1517]: 2024-02-08T23:35:51.327234Z INFO Daemon Daemon Detect protocol endpoint Feb 8 23:35:51.342121 waagent[1517]: 2024-02-08T23:35:51.327810Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:35:51.342121 waagent[1517]: 2024-02-08T23:35:51.328940Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 8 23:35:51.342121 waagent[1517]: 2024-02-08T23:35:51.329783Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 8 23:35:51.342121 waagent[1517]: 2024-02-08T23:35:51.330873Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 8 23:35:51.342121 waagent[1517]: 2024-02-08T23:35:51.331647Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 8 23:35:51.438314 waagent[1517]: 2024-02-08T23:35:51.438170Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 8 23:35:51.442451 waagent[1517]: 2024-02-08T23:35:51.442403Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 8 23:35:51.445114 waagent[1517]: 2024-02-08T23:35:51.445057Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 8 23:35:51.754275 waagent[1517]: 2024-02-08T23:35:51.754132Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 8 23:35:51.766412 waagent[1517]: 2024-02-08T23:35:51.766332Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 8 23:35:51.771255 waagent[1517]: 2024-02-08T23:35:51.766695Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 8 23:35:51.854516 waagent[1517]: 2024-02-08T23:35:51.854386Z INFO Daemon Daemon Found private key matching thumbprint 8D6A134EEC45F6CEEBA7B11F708F64A7D9E19C87 Feb 8 23:35:51.859267 waagent[1517]: 2024-02-08T23:35:51.859196Z INFO Daemon Daemon Certificate with thumbprint A39D3F0D13C9124B27D43908EB3DAC336508CFF5 has no matching private key. 
Feb 8 23:35:51.864126 waagent[1517]: 2024-02-08T23:35:51.864062Z INFO Daemon Daemon Fetch goal state completed Feb 8 23:35:51.916651 waagent[1517]: 2024-02-08T23:35:51.916567Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 279c927f-b555-4360-9f8a-0e9843d93937 New eTag: 3509751656365190241] Feb 8 23:35:51.922717 waagent[1517]: 2024-02-08T23:35:51.922617Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:35:51.935388 waagent[1517]: 2024-02-08T23:35:51.935323Z INFO Daemon Daemon Starting provisioning Feb 8 23:35:51.937788 waagent[1517]: 2024-02-08T23:35:51.937725Z INFO Daemon Daemon Handle ovf-env.xml. Feb 8 23:35:51.940257 waagent[1517]: 2024-02-08T23:35:51.940197Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-9933156126] Feb 8 23:35:51.960220 waagent[1517]: 2024-02-08T23:35:51.960078Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-9933156126] Feb 8 23:35:51.963776 waagent[1517]: 2024-02-08T23:35:51.963688Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 8 23:35:51.966950 waagent[1517]: 2024-02-08T23:35:51.966888Z INFO Daemon Daemon Primary interface is [eth0] Feb 8 23:35:51.980975 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 8 23:35:51.981284 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 8 23:35:51.981356 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 8 23:35:51.981645 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:35:51.986714 systemd-networkd[1245]: eth0: DHCPv6 lease lost Feb 8 23:35:51.988122 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:35:51.988422 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:35:51.991446 systemd[1]: Starting systemd-networkd.service... 
Feb 8 23:35:52.026791 systemd-networkd[1576]: enP2474s1: Link UP Feb 8 23:35:52.026801 systemd-networkd[1576]: enP2474s1: Gained carrier Feb 8 23:35:52.028161 systemd-networkd[1576]: eth0: Link UP Feb 8 23:35:52.028171 systemd-networkd[1576]: eth0: Gained carrier Feb 8 23:35:52.028600 systemd-networkd[1576]: lo: Link UP Feb 8 23:35:52.028609 systemd-networkd[1576]: lo: Gained carrier Feb 8 23:35:52.029113 systemd-networkd[1576]: eth0: Gained IPv6LL Feb 8 23:35:52.029396 systemd-networkd[1576]: Enumeration completed Feb 8 23:35:52.034265 waagent[1517]: 2024-02-08T23:35:52.030928Z INFO Daemon Daemon Create user account if not exists Feb 8 23:35:52.029535 systemd[1]: Started systemd-networkd.service. Feb 8 23:35:52.032517 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:35:52.035386 waagent[1517]: 2024-02-08T23:35:52.035229Z INFO Daemon Daemon User core already exists, skip useradd Feb 8 23:35:52.039746 waagent[1517]: 2024-02-08T23:35:52.038919Z INFO Daemon Daemon Configure sudoer Feb 8 23:35:52.040639 systemd-networkd[1576]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:35:52.043060 waagent[1517]: 2024-02-08T23:35:52.042991Z INFO Daemon Daemon Configure sshd Feb 8 23:35:52.046627 waagent[1517]: 2024-02-08T23:35:52.043257Z INFO Daemon Daemon Deploy ssh public key. Feb 8 23:35:52.085923 waagent[1517]: 2024-02-08T23:35:52.085805Z INFO Daemon Daemon Decode custom data Feb 8 23:35:52.088916 waagent[1517]: 2024-02-08T23:35:52.088845Z INFO Daemon Daemon Save custom data Feb 8 23:35:52.095786 systemd-networkd[1576]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:35:52.100064 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 8 23:35:53.361995 waagent[1517]: 2024-02-08T23:35:53.361902Z INFO Daemon Daemon Provisioning complete Feb 8 23:35:53.374991 waagent[1517]: 2024-02-08T23:35:53.374925Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 8 23:35:53.381659 waagent[1517]: 2024-02-08T23:35:53.375333Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 8 23:35:53.381659 waagent[1517]: 2024-02-08T23:35:53.377017Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 8 23:35:53.641283 waagent[1586]: 2024-02-08T23:35:53.641118Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 8 23:35:53.642032 waagent[1586]: 2024-02-08T23:35:53.641967Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:53.642208 waagent[1586]: 2024-02-08T23:35:53.642123Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:53.653146 waagent[1586]: 2024-02-08T23:35:53.653076Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 8 23:35:53.653304 waagent[1586]: 2024-02-08T23:35:53.653252Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 8 23:35:53.712896 waagent[1586]: 2024-02-08T23:35:53.712773Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8D6A134EEC45F6CEEBA7B11F708F64A7D9E19C87 Feb 8 23:35:53.713112 waagent[1586]: 2024-02-08T23:35:53.713051Z INFO ExtHandler ExtHandler Certificate with thumbprint A39D3F0D13C9124B27D43908EB3DAC336508CFF5 has no matching private key. 
Feb 8 23:35:53.713348 waagent[1586]: 2024-02-08T23:35:53.713297Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 8 23:35:53.732378 waagent[1586]: 2024-02-08T23:35:53.732315Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: f60eb435-d64c-4b91-a48d-f35f5f95ad54 New eTag: 3509751656365190241] Feb 8 23:35:53.732966 waagent[1586]: 2024-02-08T23:35:53.732908Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:35:53.799984 waagent[1586]: 2024-02-08T23:35:53.799844Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:35:53.823072 waagent[1586]: 2024-02-08T23:35:53.822994Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1586 Feb 8 23:35:53.826427 waagent[1586]: 2024-02-08T23:35:53.826361Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:35:53.827679 waagent[1586]: 2024-02-08T23:35:53.827619Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:35:53.902336 waagent[1586]: 2024-02-08T23:35:53.902222Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:35:53.902710 waagent[1586]: 2024-02-08T23:35:53.902628Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:35:53.910595 waagent[1586]: 2024-02-08T23:35:53.910543Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 8 23:35:53.911061 waagent[1586]: 2024-02-08T23:35:53.911003Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:35:53.912103 waagent[1586]: 2024-02-08T23:35:53.912040Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 8 23:35:53.913348 waagent[1586]: 2024-02-08T23:35:53.913290Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:35:53.913959 waagent[1586]: 2024-02-08T23:35:53.913902Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:35:53.914558 waagent[1586]: 2024-02-08T23:35:53.914498Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:35:53.914653 waagent[1586]: 2024-02-08T23:35:53.914597Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:35:53.915026 waagent[1586]: 2024-02-08T23:35:53.914971Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:53.915172 waagent[1586]: 2024-02-08T23:35:53.915127Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:53.915683 waagent[1586]: 2024-02-08T23:35:53.915615Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 8 23:35:53.916156 waagent[1586]: 2024-02-08T23:35:53.916098Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:35:53.916291 waagent[1586]: 2024-02-08T23:35:53.916215Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 8 23:35:53.916481 waagent[1586]: 2024-02-08T23:35:53.916436Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:53.917820 waagent[1586]: 2024-02-08T23:35:53.917760Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:53.918048 waagent[1586]: 2024-02-08T23:35:53.917999Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:35:53.918597 waagent[1586]: 2024-02-08T23:35:53.918539Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:35:53.919172 waagent[1586]: 2024-02-08T23:35:53.919119Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:35:53.919556 waagent[1586]: 2024-02-08T23:35:53.919502Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:35:53.919556 waagent[1586]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:35:53.919556 waagent[1586]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:35:53.919556 waagent[1586]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:35:53.919556 waagent[1586]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:53.919556 waagent[1586]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:53.919556 waagent[1586]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:53.927809 waagent[1586]: 2024-02-08T23:35:53.927404Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:35:53.930616 waagent[1586]: 2024-02-08T23:35:53.930565Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 8 23:35:53.932200 waagent[1586]: 2024-02-08T23:35:53.932151Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:35:53.936527 waagent[1586]: 2024-02-08T23:35:53.936469Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 8 23:35:53.964530 waagent[1586]: 2024-02-08T23:35:53.964421Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1576' Feb 8 23:35:53.978490 waagent[1586]: 2024-02-08T23:35:53.978419Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 8 23:35:54.081414 waagent[1586]: 2024-02-08T23:35:54.081277Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:35:54.081414 waagent[1586]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:35:54.081414 waagent[1586]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:35:54.081414 waagent[1586]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:1f:12 brd ff:ff:ff:ff:ff:ff Feb 8 23:35:54.081414 waagent[1586]: 3: enP2474s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:1f:12 brd ff:ff:ff:ff:ff:ff\ altname enP2474p0s2 Feb 8 23:35:54.081414 waagent[1586]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:35:54.081414 waagent[1586]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:35:54.081414 waagent[1586]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:35:54.081414 waagent[1586]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:35:54.081414 waagent[1586]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:35:54.081414 waagent[1586]: 2: eth0 inet6 fe80::222:48ff:fe9d:1f12/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:35:54.233790 waagent[1586]: 2024-02-08T23:35:54.233714Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 
discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 8 23:35:54.380925 waagent[1517]: 2024-02-08T23:35:54.380768Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 8 23:35:54.387140 waagent[1517]: 2024-02-08T23:35:54.387078Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 8 23:35:55.445401 waagent[1624]: 2024-02-08T23:35:55.445288Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 8 23:35:55.446108 waagent[1624]: 2024-02-08T23:35:55.446047Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 8 23:35:55.446254 waagent[1624]: 2024-02-08T23:35:55.446201Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 8 23:35:55.455749 waagent[1624]: 2024-02-08T23:35:55.455631Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:35:55.456125 waagent[1624]: 2024-02-08T23:35:55.456068Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:55.456283 waagent[1624]: 2024-02-08T23:35:55.456234Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:55.467518 waagent[1624]: 2024-02-08T23:35:55.467445Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 8 23:35:55.475575 waagent[1624]: 2024-02-08T23:35:55.475514Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 8 23:35:55.476467 waagent[1624]: 2024-02-08T23:35:55.476407Z INFO ExtHandler Feb 8 23:35:55.476616 waagent[1624]: 2024-02-08T23:35:55.476566Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: af27fa00-5926-4c87-97b1-fc6e500a66ff eTag: 3509751656365190241 source: Fabric] Feb 8 23:35:55.477314 waagent[1624]: 2024-02-08T23:35:55.477256Z INFO ExtHandler The vmSettings originated via Fabric; 
will ignore them. Feb 8 23:35:55.478383 waagent[1624]: 2024-02-08T23:35:55.478323Z INFO ExtHandler Feb 8 23:35:55.478514 waagent[1624]: 2024-02-08T23:35:55.478465Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 8 23:35:55.485012 waagent[1624]: 2024-02-08T23:35:55.484953Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 8 23:35:55.485425 waagent[1624]: 2024-02-08T23:35:55.485376Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:35:55.506111 waagent[1624]: 2024-02-08T23:35:55.506038Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 8 23:35:55.568888 waagent[1624]: 2024-02-08T23:35:55.568758Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A39D3F0D13C9124B27D43908EB3DAC336508CFF5', 'hasPrivateKey': False} Feb 8 23:35:55.569860 waagent[1624]: 2024-02-08T23:35:55.569795Z INFO ExtHandler Downloaded certificate {'thumbprint': '8D6A134EEC45F6CEEBA7B11F708F64A7D9E19C87', 'hasPrivateKey': True} Feb 8 23:35:55.570811 waagent[1624]: 2024-02-08T23:35:55.570753Z INFO ExtHandler Fetch goal state completed Feb 8 23:35:55.590513 waagent[1624]: 2024-02-08T23:35:55.590438Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1624 Feb 8 23:35:55.593685 waagent[1624]: 2024-02-08T23:35:55.593620Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:35:55.595150 waagent[1624]: 2024-02-08T23:35:55.595093Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:35:55.599982 waagent[1624]: 2024-02-08T23:35:55.599928Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:35:55.600320 waagent[1624]: 2024-02-08T23:35:55.600265Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 
23:35:55.608129 waagent[1624]: 2024-02-08T23:35:55.608077Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:35:55.608562 waagent[1624]: 2024-02-08T23:35:55.608507Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:35:55.614407 waagent[1624]: 2024-02-08T23:35:55.614309Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 8 23:35:55.618962 waagent[1624]: 2024-02-08T23:35:55.618905Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 8 23:35:55.620295 waagent[1624]: 2024-02-08T23:35:55.620236Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:35:55.620730 waagent[1624]: 2024-02-08T23:35:55.620656Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:55.621272 waagent[1624]: 2024-02-08T23:35:55.621219Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:55.621429 waagent[1624]: 2024-02-08T23:35:55.621380Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:55.621590 waagent[1624]: 2024-02-08T23:35:55.621524Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:35:55.622152 waagent[1624]: 2024-02-08T23:35:55.622098Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 8 23:35:55.622538 waagent[1624]: 2024-02-08T23:35:55.622481Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:35:55.622765 waagent[1624]: 2024-02-08T23:35:55.622697Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 8 23:35:55.623111 waagent[1624]: 2024-02-08T23:35:55.623057Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:35:55.623111 waagent[1624]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:35:55.623111 waagent[1624]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:35:55.623111 waagent[1624]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:35:55.623111 waagent[1624]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:55.623111 waagent[1624]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:55.623111 waagent[1624]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:55.623749 waagent[1624]: 2024-02-08T23:35:55.623695Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:55.627041 waagent[1624]: 2024-02-08T23:35:55.626967Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:35:55.627203 waagent[1624]: 2024-02-08T23:35:55.627153Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:35:55.627344 waagent[1624]: 2024-02-08T23:35:55.627299Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:35:55.627611 waagent[1624]: 2024-02-08T23:35:55.627543Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:35:55.628133 waagent[1624]: 2024-02-08T23:35:55.628076Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 8 23:35:55.640337 waagent[1624]: 2024-02-08T23:35:55.639181Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 8 23:35:55.645634 waagent[1624]: 2024-02-08T23:35:55.645573Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 8 23:35:55.645634 waagent[1624]: Executing ['ip', '-a', '-o', 'link']:
Feb 8 23:35:55.645634 waagent[1624]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 8 23:35:55.645634 waagent[1624]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:1f:12 brd ff:ff:ff:ff:ff:ff
Feb 8 23:35:55.645634 waagent[1624]: 3: enP2474s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9d:1f:12 brd ff:ff:ff:ff:ff:ff\ altname enP2474p0s2
Feb 8 23:35:55.645634 waagent[1624]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 8 23:35:55.645634 waagent[1624]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 8 23:35:55.645634 waagent[1624]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 8 23:35:55.645634 waagent[1624]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 8 23:35:55.645634 waagent[1624]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 8 23:35:55.645634 waagent[1624]: 2: eth0 inet6 fe80::222:48ff:fe9d:1f12/64 scope link \ valid_lft forever preferred_lft forever
Feb 8 23:35:55.661045 waagent[1624]: 2024-02-08T23:35:55.660955Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 8 23:35:55.662211 waagent[1624]: 2024-02-08T23:35:55.662156Z INFO ExtHandler ExtHandler Downloading manifest
Feb 8 23:35:55.734548 waagent[1624]: 2024-02-08T23:35:55.734485Z INFO ExtHandler ExtHandler
Feb 8 23:35:55.735953 waagent[1624]: 2024-02-08T23:35:55.735889Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 135ebb86-beeb-4929-be8e-b9b1c5889c86 correlation e5c79aef-fc1f-4bd3-bcdc-908f9c8754bb created: 2024-02-08T23:34:21.284531Z]
Feb 8 23:35:55.739840 waagent[1624]: 2024-02-08T23:35:55.739769Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 8 23:35:55.744822 waagent[1624]: 2024-02-08T23:35:55.744705Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms]
Feb 8 23:35:55.760107 waagent[1624]: 2024-02-08T23:35:55.760001Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Feb 8 23:35:55.760107 waagent[1624]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:35:55.760107 waagent[1624]: pkts bytes target prot opt in out source destination
Feb 8 23:35:55.760107 waagent[1624]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:35:55.760107 waagent[1624]: pkts bytes target prot opt in out source destination
Feb 8 23:35:55.760107 waagent[1624]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:35:55.760107 waagent[1624]: pkts bytes target prot opt in out source destination
Feb 8 23:35:55.760107 waagent[1624]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 8 23:35:55.760107 waagent[1624]: 13 4435 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 8 23:35:55.760107 waagent[1624]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 8 23:35:55.767237 waagent[1624]: 2024-02-08T23:35:55.767130Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 8 23:35:55.767237 waagent[1624]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:35:55.767237 waagent[1624]: pkts bytes target prot opt in out source destination
Feb 8 23:35:55.767237 waagent[1624]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:35:55.767237 waagent[1624]: pkts bytes target prot opt in out source destination
Feb 8 23:35:55.767237 waagent[1624]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:35:55.767237 waagent[1624]: pkts bytes target prot opt in out source destination
Feb 8 23:35:55.767237 waagent[1624]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 8 23:35:55.767237 waagent[1624]: 13 4435 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 8 23:35:55.767237 waagent[1624]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 8 23:35:55.768914 waagent[1624]: 2024-02-08T23:35:55.768854Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Feb 8 23:35:55.773070 waagent[1624]: 2024-02-08T23:35:55.772944Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 8 23:35:55.778958 waagent[1624]: 2024-02-08T23:35:55.778888Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FAA6D5C5-BFD5-48C8-82E2-A142B23AB45D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Feb 8 23:36:21.868307 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Feb 8 23:36:29.625304 systemd[1]: Created slice system-sshd.slice.
Feb 8 23:36:29.627247 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.12.6:50722.service.
Feb 8 23:36:30.495792 sshd[1671]: Accepted publickey for core from 10.200.12.6 port 50722 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:36:30.497428 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:36:30.502735 systemd-logind[1392]: New session 3 of user core.
Feb 8 23:36:30.503349 systemd[1]: Started session-3.scope.
Feb 8 23:36:30.556465 update_engine[1395]: I0208 23:36:30.556391 1395 update_attempter.cc:509] Updating boot flags...
Feb 8 23:36:31.034254 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.12.6:50726.service.
Feb 8 23:36:31.670892 sshd[1769]: Accepted publickey for core from 10.200.12.6 port 50726 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:36:31.672478 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:36:31.677190 systemd[1]: Started session-4.scope.
Feb 8 23:36:31.677928 systemd-logind[1392]: New session 4 of user core.
Feb 8 23:36:32.123082 sshd[1769]: pam_unix(sshd:session): session closed for user core
Feb 8 23:36:32.126457 systemd[1]: sshd@1-10.200.8.4:22-10.200.12.6:50726.service: Deactivated successfully.
Feb 8 23:36:32.128379 systemd[1]: session-4.scope: Deactivated successfully.
Feb 8 23:36:32.129378 systemd-logind[1392]: Session 4 logged out. Waiting for processes to exit.
Feb 8 23:36:32.130540 systemd-logind[1392]: Removed session 4.
Feb 8 23:36:32.226267 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.12.6:50730.service.
Feb 8 23:36:32.847604 sshd[1776]: Accepted publickey for core from 10.200.12.6 port 50730 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:36:32.849283 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:36:32.854572 systemd[1]: Started session-5.scope.
Feb 8 23:36:32.854847 systemd-logind[1392]: New session 5 of user core.
Feb 8 23:36:33.285520 sshd[1776]: pam_unix(sshd:session): session closed for user core
Feb 8 23:36:33.288968 systemd[1]: sshd@2-10.200.8.4:22-10.200.12.6:50730.service: Deactivated successfully.
Feb 8 23:36:33.290495 systemd[1]: session-5.scope: Deactivated successfully.
Feb 8 23:36:33.290535 systemd-logind[1392]: Session 5 logged out. Waiting for processes to exit.
Feb 8 23:36:33.291995 systemd-logind[1392]: Removed session 5.
Feb 8 23:36:33.387435 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.12.6:50734.service.
Feb 8 23:36:34.012721 sshd[1783]: Accepted publickey for core from 10.200.12.6 port 50734 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:36:34.015016 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:36:34.020175 systemd[1]: Started session-6.scope.
Feb 8 23:36:34.020589 systemd-logind[1392]: New session 6 of user core.
Feb 8 23:36:34.452212 sshd[1783]: pam_unix(sshd:session): session closed for user core
Feb 8 23:36:34.455619 systemd[1]: sshd@3-10.200.8.4:22-10.200.12.6:50734.service: Deactivated successfully.
Feb 8 23:36:34.457030 systemd[1]: session-6.scope: Deactivated successfully.
Feb 8 23:36:34.457045 systemd-logind[1392]: Session 6 logged out. Waiting for processes to exit.
Feb 8 23:36:34.458212 systemd-logind[1392]: Removed session 6.
Feb 8 23:36:34.555799 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.12.6:50736.service.
Feb 8 23:36:35.179341 sshd[1790]: Accepted publickey for core from 10.200.12.6 port 50736 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:36:35.180987 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:36:35.186771 systemd[1]: Started session-7.scope.
Feb 8 23:36:35.187081 systemd-logind[1392]: New session 7 of user core.
Feb 8 23:36:35.768888 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 8 23:36:35.769244 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 8 23:36:35.777865 dbus-daemon[1373]: \xd0\xdd\xd3\xd5\xc7U: received setenforce notice (enforcing=-1439341600)
Feb 8 23:36:35.779615 sudo[1794]: pam_unix(sudo:session): session closed for user root
Feb 8 23:36:35.894600 sshd[1790]: pam_unix(sshd:session): session closed for user core
Feb 8 23:36:35.898312 systemd[1]: sshd@4-10.200.8.4:22-10.200.12.6:50736.service: Deactivated successfully.
Feb 8 23:36:35.900298 systemd[1]: session-7.scope: Deactivated successfully.
Feb 8 23:36:35.901030 systemd-logind[1392]: Session 7 logged out. Waiting for processes to exit.
Feb 8 23:36:35.902593 systemd-logind[1392]: Removed session 7.
Feb 8 23:36:35.997901 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.12.6:50742.service.
Feb 8 23:36:36.619069 sshd[1798]: Accepted publickey for core from 10.200.12.6 port 50742 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:36:36.620757 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:36:36.625836 systemd[1]: Started session-8.scope.
Feb 8 23:36:36.626079 systemd-logind[1392]: New session 8 of user core.
Feb 8 23:36:36.961781 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 8 23:36:36.962053 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 8 23:36:36.964781 sudo[1803]: pam_unix(sudo:session): session closed for user root
Feb 8 23:36:36.969128 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 8 23:36:36.969385 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 8 23:36:36.978003 systemd[1]: Stopping audit-rules.service...
Feb 8 23:36:36.978000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 8 23:36:36.979714 auditctl[1806]: No rules
Feb 8 23:36:36.980109 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 8 23:36:36.980327 systemd[1]: Stopped audit-rules.service.
Feb 8 23:36:36.982240 systemd[1]: Starting audit-rules.service...
Feb 8 23:36:36.982557 kernel: kauditd_printk_skb: 2 callbacks suppressed
Feb 8 23:36:36.982597 kernel: audit: type=1305 audit(1707435396.978:139): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 8 23:36:36.978000 audit[1806]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdaabeace0 a2=420 a3=0 items=0 ppid=1 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:37.002969 kernel: audit: type=1300 audit(1707435396.978:139): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdaabeace0 a2=420 a3=0 items=0 ppid=1 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:37.003036 kernel: audit: type=1327 audit(1707435396.978:139): proctitle=2F7362696E2F617564697463746C002D44
Feb 8 23:36:36.978000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 8 23:36:36.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.009317 augenrules[1824]: No rules
Feb 8 23:36:37.010168 systemd[1]: Finished audit-rules.service.
Feb 8 23:36:37.011108 sudo[1802]: pam_unix(sudo:session): session closed for user root
Feb 8 23:36:37.016351 kernel: audit: type=1131 audit(1707435396.978:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.016415 kernel: audit: type=1130 audit(1707435397.006:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.025689 kernel: audit: type=1106 audit(1707435397.006:142): pid=1802 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.006000 audit[1802]: USER_END pid=1802 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.006000 audit[1802]: CRED_DISP pid=1802 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.047291 kernel: audit: type=1104 audit(1707435397.006:143): pid=1802 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.121975 sshd[1798]: pam_unix(sshd:session): session closed for user core
Feb 8 23:36:37.122000 audit[1798]: USER_END pid=1798 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:36:37.125276 systemd[1]: sshd@5-10.200.8.4:22-10.200.12.6:50742.service: Deactivated successfully.
Feb 8 23:36:37.126213 systemd[1]: session-8.scope: Deactivated successfully.
Feb 8 23:36:37.127683 systemd-logind[1392]: Session 8 logged out. Waiting for processes to exit.
Feb 8 23:36:37.128548 systemd-logind[1392]: Removed session 8.
Feb 8 23:36:37.123000 audit[1798]: CRED_DISP pid=1798 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:36:37.148010 kernel: audit: type=1106 audit(1707435397.122:144): pid=1798 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:36:37.148082 kernel: audit: type=1104 audit(1707435397.123:145): pid=1798 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:36:37.148111 kernel: audit: type=1131 audit(1707435397.124:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.4:22-10.200.12.6:50742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.4:22-10.200.12.6:50742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.222505 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.12.6:51780.service.
Feb 8 23:36:37.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.4:22-10.200.12.6:51780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:37.835000 audit[1831]: USER_ACCT pid=1831 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:36:37.836877 sshd[1831]: Accepted publickey for core from 10.200.12.6 port 51780 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:36:37.837000 audit[1831]: CRED_ACQ pid=1831 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:36:37.837000 audit[1831]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcabda0a50 a2=3 a3=0 items=0 ppid=1 pid=1831 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:37.837000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:36:37.838573 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:36:37.844319 systemd[1]: Started session-9.scope.
Feb 8 23:36:37.844625 systemd-logind[1392]: New session 9 of user core.
Feb 8 23:36:37.848000 audit[1831]: USER_START pid=1831 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:36:37.850000 audit[1834]: CRED_ACQ pid=1834 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:36:38.173000 audit[1835]: USER_ACCT pid=1835 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:38.174725 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 8 23:36:38.174000 audit[1835]: CRED_REFR pid=1835 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:38.174993 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 8 23:36:38.176000 audit[1835]: USER_START pid=1835 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:39.135940 systemd[1]: Starting docker.service...
Feb 8 23:36:39.192338 env[1850]: time="2024-02-08T23:36:39.192279602Z" level=info msg="Starting up"
Feb 8 23:36:39.194680 env[1850]: time="2024-02-08T23:36:39.194640307Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 8 23:36:39.194785 env[1850]: time="2024-02-08T23:36:39.194676907Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 8 23:36:39.194785 env[1850]: time="2024-02-08T23:36:39.194705307Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 8 23:36:39.194785 env[1850]: time="2024-02-08T23:36:39.194722207Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 8 23:36:39.199458 env[1850]: time="2024-02-08T23:36:39.199439618Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 8 23:36:39.199559 env[1850]: time="2024-02-08T23:36:39.199548218Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 8 23:36:39.199612 env[1850]: time="2024-02-08T23:36:39.199601618Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 8 23:36:39.199653 env[1850]: time="2024-02-08T23:36:39.199645318Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 8 23:36:39.283087 env[1850]: time="2024-02-08T23:36:39.283050001Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 8 23:36:39.283087 env[1850]: time="2024-02-08T23:36:39.283075102Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 8 23:36:39.283361 env[1850]: time="2024-02-08T23:36:39.283281702Z" level=info msg="Loading containers: start."
Feb 8 23:36:39.319000 audit[1878]: NETFILTER_CFG table=nat:8 family=2 entries=2 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.319000 audit[1878]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd1bcd04f0 a2=0 a3=7ffd1bcd04dc items=0 ppid=1850 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.319000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Feb 8 23:36:39.321000 audit[1880]: NETFILTER_CFG table=filter:9 family=2 entries=2 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.321000 audit[1880]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff012a4770 a2=0 a3=7fff012a475c items=0 ppid=1850 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.321000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Feb 8 23:36:39.322000 audit[1882]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.322000 audit[1882]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe9a911c50 a2=0 a3=7ffe9a911c3c items=0 ppid=1850 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.322000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 8 23:36:39.324000 audit[1884]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.324000 audit[1884]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc7ac1f840 a2=0 a3=7ffc7ac1f82c items=0 ppid=1850 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.324000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 8 23:36:39.326000 audit[1886]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.326000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffda8790300 a2=0 a3=7ffda87902ec items=0 ppid=1850 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.326000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Feb 8 23:36:39.328000 audit[1888]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.328000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd38efe120 a2=0 a3=7ffd38efe10c items=0 ppid=1850 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.328000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Feb 8 23:36:39.341000 audit[1890]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.341000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffdc02cb00 a2=0 a3=7fffdc02caec items=0 ppid=1850 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.341000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Feb 8 23:36:39.343000 audit[1892]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.343000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcbc6eb340 a2=0 a3=7ffcbc6eb32c items=0 ppid=1850 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.343000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Feb 8 23:36:39.344000 audit[1894]: NETFILTER_CFG table=filter:16 family=2 entries=2 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.344000 audit[1894]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcb7ea6080 a2=0 a3=7ffcb7ea606c items=0 ppid=1850 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.344000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 8 23:36:39.358000 audit[1898]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_unregister_rule pid=1898 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.358000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe1bd83c50 a2=0 a3=7ffe1bd83c3c items=0 ppid=1850 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.358000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 8 23:36:39.359000 audit[1899]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.359000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdccad0f00 a2=0 a3=7ffdccad0eec items=0 ppid=1850 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.359000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 8 23:36:39.385732 kernel: Initializing XFRM netlink socket
Feb 8 23:36:39.411962 env[1850]: time="2024-02-08T23:36:39.410129081Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 8 23:36:39.487000 audit[1907]: NETFILTER_CFG table=nat:19 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.487000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd09f27370 a2=0 a3=7ffd09f2735c items=0 ppid=1850 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.487000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Feb 8 23:36:39.498000 audit[1910]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_rule pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.498000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd11f05580 a2=0 a3=7ffd11f0556c items=0 ppid=1850 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.498000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Feb 8 23:36:39.501000 audit[1913]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.501000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe90ebf1e0 a2=0 a3=7ffe90ebf1cc items=0 ppid=1850 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.501000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Feb 8 23:36:39.503000 audit[1915]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.503000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd48059c30 a2=0 a3=7ffd48059c1c items=0 ppid=1850 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.503000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Feb 8 23:36:39.505000 audit[1917]: NETFILTER_CFG table=nat:23 family=2 entries=2 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.505000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc69ac6bc0 a2=0 a3=7ffc69ac6bac items=0 ppid=1850 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.505000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Feb 8 23:36:39.507000 audit[1919]: NETFILTER_CFG table=nat:24 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.507000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff7ff0ad60 a2=0 a3=7fff7ff0ad4c items=0 ppid=1850 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.507000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Feb 8 23:36:39.509000 audit[1921]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.509000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc7f8f3140 a2=0 a3=7ffc7f8f312c items=0 ppid=1850 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.509000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Feb 8 23:36:39.511000 audit[1923]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.511000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffce038ad40 a2=0 a3=7ffce038ad2c items=0 ppid=1850 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.511000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Feb 8 23:36:39.513000 audit[1925]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.513000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff1351df30 a2=0 a3=7fff1351df1c items=0 ppid=1850 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.513000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 8 23:36:39.515000 audit[1927]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.515000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd1d9c74d0 a2=0 a3=7ffd1d9c74bc items=0 ppid=1850 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.515000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 8 23:36:39.517000 audit[1929]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.517000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd35a154a0 a2=0 a3=7ffd35a1548c items=0 ppid=1850 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.517000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Feb 8 23:36:39.518762 systemd-networkd[1576]: docker0: Link UP
Feb 8 23:36:39.532000 audit[1933]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_unregister_rule pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.532000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff389786e0 a2=0 a3=7fff389786cc items=0 ppid=1850 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.532000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 8 23:36:39.533000 audit[1934]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 8 23:36:39.533000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe0a3a78d0 a2=0 a3=7ffe0a3a78bc items=0 ppid=1850 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:36:39.533000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 8 23:36:39.534703 env[1850]: time="2024-02-08T23:36:39.534633854Z" level=info msg="Loading containers: done."
Feb 8 23:36:39.546538 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2097058922-merged.mount: Deactivated successfully.
Feb 8 23:36:39.600148 env[1850]: time="2024-02-08T23:36:39.600092198Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 8 23:36:39.600391 env[1850]: time="2024-02-08T23:36:39.600355199Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 8 23:36:39.600528 env[1850]: time="2024-02-08T23:36:39.600503899Z" level=info msg="Daemon has completed initialization"
Feb 8 23:36:39.624339 systemd[1]: Started docker.service.
Feb 8 23:36:39.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:39.632074 env[1850]: time="2024-02-08T23:36:39.632028768Z" level=info msg="API listen on /run/docker.sock"
Feb 8 23:36:39.650085 systemd[1]: Reloading.
Feb 8 23:36:39.730995 /usr/lib/systemd/system-generators/torcx-generator[1982]: time="2024-02-08T23:36:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 8 23:36:39.731450 /usr/lib/systemd/system-generators/torcx-generator[1982]: time="2024-02-08T23:36:39Z" level=info msg="torcx already run"
Feb 8 23:36:39.821268 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:36:39.821287 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:36:39.839271 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:36:39.918182 systemd[1]: Started kubelet.service.
Feb 8 23:36:39.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:39.989376 kubelet[2050]: E0208 23:36:39.988577 2050 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 8 23:36:39.990397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 8 23:36:39.990615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 8 23:36:39.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 8 23:36:43.922204 env[1405]: time="2024-02-08T23:36:43.922142840Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb 8 23:36:44.538289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930654506.mount: Deactivated successfully.
Feb 8 23:36:46.566329 env[1405]: time="2024-02-08T23:36:46.566273478Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:46.570657 env[1405]: time="2024-02-08T23:36:46.570621288Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:46.574752 env[1405]: time="2024-02-08T23:36:46.574717891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:46.578480 env[1405]: time="2024-02-08T23:36:46.578450485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:46.579077 env[1405]: time="2024-02-08T23:36:46.579044500Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb 8 23:36:46.588651 env[1405]: time="2024-02-08T23:36:46.588624741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 8 23:36:48.663728 env[1405]: time="2024-02-08T23:36:48.663654515Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:48.669336 env[1405]: time="2024-02-08T23:36:48.669296750Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:48.673134 env[1405]: time="2024-02-08T23:36:48.673105540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:48.677655 env[1405]: time="2024-02-08T23:36:48.677627348Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:48.678274 env[1405]: time="2024-02-08T23:36:48.678240763Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 8 23:36:48.688763 env[1405]: time="2024-02-08T23:36:48.688724013Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 8 23:36:49.857353 env[1405]: time="2024-02-08T23:36:49.857300598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:49.863136 env[1405]: time="2024-02-08T23:36:49.863098532Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:49.866811 env[1405]: time="2024-02-08T23:36:49.866780218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:49.872269 env[1405]: time="2024-02-08T23:36:49.872241544Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:49.872892 env[1405]: time="2024-02-08T23:36:49.872861059Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 8 23:36:49.882506 env[1405]: time="2024-02-08T23:36:49.882482382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 8 23:36:50.202479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 8 23:36:50.215999 kernel: kauditd_printk_skb: 86 callbacks suppressed
Feb 8 23:36:50.216083 kernel: audit: type=1130 audit(1707435410.202:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:50.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:50.202827 systemd[1]: Stopped kubelet.service.
Feb 8 23:36:50.205145 systemd[1]: Started kubelet.service.
Feb 8 23:36:50.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:50.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:50.241702 kernel: audit: type=1131 audit(1707435410.202:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:50.241791 kernel: audit: type=1130 audit(1707435410.202:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:36:50.263451 kubelet[2082]: E0208 23:36:50.263394 2082 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 8 23:36:50.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 8 23:36:50.266819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 8 23:36:50.267027 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 8 23:36:50.279687 kernel: audit: type=1131 audit(1707435410.266:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 8 23:36:50.951759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788100846.mount: Deactivated successfully.
Feb 8 23:36:51.426255 env[1405]: time="2024-02-08T23:36:51.426130700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:51.430465 env[1405]: time="2024-02-08T23:36:51.430427694Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:51.432953 env[1405]: time="2024-02-08T23:36:51.432926049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:51.437685 env[1405]: time="2024-02-08T23:36:51.437640053Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:51.438085 env[1405]: time="2024-02-08T23:36:51.438055162Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 8 23:36:51.447420 env[1405]: time="2024-02-08T23:36:51.447395867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 8 23:36:51.932735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226494099.mount: Deactivated successfully.
Feb 8 23:36:51.947397 env[1405]: time="2024-02-08T23:36:51.947353234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:51.953229 env[1405]: time="2024-02-08T23:36:51.953194662Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:51.957396 env[1405]: time="2024-02-08T23:36:51.957366754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:51.961445 env[1405]: time="2024-02-08T23:36:51.961416443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:51.961865 env[1405]: time="2024-02-08T23:36:51.961835452Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 8 23:36:51.971515 env[1405]: time="2024-02-08T23:36:51.971489964Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 8 23:36:52.714624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865261276.mount: Deactivated successfully.
Feb 8 23:36:56.975612 env[1405]: time="2024-02-08T23:36:56.975558917Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:56.981914 env[1405]: time="2024-02-08T23:36:56.981877238Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:56.985805 env[1405]: time="2024-02-08T23:36:56.985771013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:56.989308 env[1405]: time="2024-02-08T23:36:56.989276180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:56.989847 env[1405]: time="2024-02-08T23:36:56.989816191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 8 23:36:56.999617 env[1405]: time="2024-02-08T23:36:56.999591378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 8 23:36:57.488782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534957683.mount: Deactivated successfully.
Feb 8 23:36:58.172004 env[1405]: time="2024-02-08T23:36:58.170949753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:58.302637 env[1405]: time="2024-02-08T23:36:58.302574245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:58.750331 env[1405]: time="2024-02-08T23:36:58.750278581Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:58.754455 env[1405]: time="2024-02-08T23:36:58.754411956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:36:58.755084 env[1405]: time="2024-02-08T23:36:58.755041967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 8 23:37:00.452492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 8 23:37:00.452767 systemd[1]: Stopped kubelet.service.
Feb 8 23:37:00.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:00.469757 kernel: audit: type=1130 audit(1707435420.452:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:00.470022 systemd[1]: Started kubelet.service.
Feb 8 23:37:00.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:00.497893 kernel: audit: type=1131 audit(1707435420.452:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:00.497993 kernel: audit: type=1130 audit(1707435420.469:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:00.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:00.572248 kubelet[2157]: E0208 23:37:00.572193 2157 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 8 23:37:00.590202 kernel: audit: type=1131 audit(1707435420.574:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 8 23:37:00.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 8 23:37:00.574724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 8 23:37:00.574916 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 8 23:37:01.534050 systemd[1]: Stopped kubelet.service.
Feb 8 23:37:01.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:01.561777 kernel: audit: type=1130 audit(1707435421.533:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:01.561916 kernel: audit: type=1131 audit(1707435421.536:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:01.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:01.556443 systemd[1]: Reloading.
Feb 8 23:37:01.634225 /usr/lib/systemd/system-generators/torcx-generator[2187]: time="2024-02-08T23:37:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 8 23:37:01.634262 /usr/lib/systemd/system-generators/torcx-generator[2187]: time="2024-02-08T23:37:01Z" level=info msg="torcx already run"
Feb 8 23:37:01.724786 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:37:01.724805 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:37:01.742734 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:37:01.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:01.829254 systemd[1]: Started kubelet.service.
Feb 8 23:37:01.844694 kernel: audit: type=1130 audit(1707435421.828:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:01.897416 kubelet[2256]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 8 23:37:01.897416 kubelet[2256]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 8 23:37:01.897919 kubelet[2256]: I0208 23:37:01.897480 2256 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 8 23:37:01.898893 kubelet[2256]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 8 23:37:01.898893 kubelet[2256]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 8 23:37:02.325227 kubelet[2256]: I0208 23:37:02.325192 2256 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 8 23:37:02.325415 kubelet[2256]: I0208 23:37:02.325393 2256 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 8 23:37:02.325682 kubelet[2256]: I0208 23:37:02.325650 2256 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 8 23:37:02.328839 kubelet[2256]: E0208 23:37:02.328819 2256 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused
Feb 8 23:37:02.329019 kubelet[2256]: I0208 23:37:02.329005 2256 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 8 23:37:02.331660 kubelet[2256]: I0208 23:37:02.331621 2256 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 8 23:37:02.332022 kubelet[2256]: I0208 23:37:02.332000 2256 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 8 23:37:02.332106 kubelet[2256]: I0208 23:37:02.332084 2256 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 8 23:37:02.332233 kubelet[2256]: I0208 23:37:02.332123 2256 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 8 23:37:02.332233 kubelet[2256]: I0208 23:37:02.332140 2256 container_manager_linux.go:308] "Creating device plugin manager"
Feb 8 23:37:02.332326 kubelet[2256]: I0208 23:37:02.332263 2256 state_mem.go:36] "Initialized new in-memory state store"
Feb 8 23:37:02.335124 kubelet[2256]: I0208 23:37:02.335104 2256 kubelet.go:398] "Attempting to sync node with API server"
Feb 8 23:37:02.335214 kubelet[2256]: I0208 23:37:02.335136 2256 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 8 23:37:02.335214 kubelet[2256]: I0208 23:37:02.335170 2256 kubelet.go:297] "Adding apiserver pod source"
Feb 8 23:37:02.335214 kubelet[2256]: I0208 23:37:02.335190 2256 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 8 23:37:02.336223 kubelet[2256]: W0208 23:37:02.336182 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Feb 8 23:37:02.336384 kubelet[2256]: E0208 23:37:02.336371 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Feb 8 23:37:02.336840 kubelet[2256]: I0208 23:37:02.336824 2256 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 8 23:37:02.337373 kubelet[2256]: W0208 23:37:02.337357 2256 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 8 23:37:02.338076 kubelet[2256]: I0208 23:37:02.338059 2256 server.go:1186] "Started kubelet"
Feb 8 23:37:02.339575 kubelet[2256]: W0208 23:37:02.339531 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-9933156126&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Feb 8 23:37:02.339673 kubelet[2256]: E0208 23:37:02.339591 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-9933156126&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Feb 8 23:37:02.341002 kubelet[2256]: E0208 23:37:02.340892 2256 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b207786160a6a0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 338033312, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 338033312, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.4:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.4:6443: connect: connection refused'(may retry after sleeping)
Feb 8 23:37:02.341167 kubelet[2256]: I0208 23:37:02.341151 2256 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 8 23:37:02.343019 kubelet[2256]: I0208 23:37:02.342997 2256 server.go:451] "Adding debug handlers to kubelet server"
Feb 8 23:37:02.347914 kubelet[2256]: E0208 23:37:02.347895 2256 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 8 23:37:02.348001 kubelet[2256]: E0208 23:37:02.347923 2256 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 8 23:37:02.348000 audit[2256]: AVC avc: denied { mac_admin } for pid=2256 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:37:02.349080 kubelet[2256]: I0208 23:37:02.349067 2256 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Feb 8 23:37:02.349164 kubelet[2256]: I0208 23:37:02.349156 2256 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Feb 8 23:37:02.349281 kubelet[2256]: I0208 23:37:02.349273 2256 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 8 23:37:02.351389 kubelet[2256]: I0208 23:37:02.351376 2256 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 8 23:37:02.351525 kubelet[2256]: I0208 23:37:02.351513 2256 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 8 23:37:02.362682 kernel: audit: type=1400 audit(1707435422.348:194): avc: denied { mac_admin } for pid=2256 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 8 23:37:02.348000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 8 23:37:02.370682 kernel: audit: type=1401 audit(1707435422.348:194): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 8 23:37:02.348000 audit[2256]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000db7f80 a1=c001066c48 a2=c000db7f50 a3=25 items=0 ppid=1 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:37:02.386867 kubelet[2256]: E0208 23:37:02.377835 2256 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-9933156126?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused
Feb 8 23:37:02.386867 kubelet[2256]: W0208 23:37:02.377901 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Feb 8 23:37:02.386867 kubelet[2256]: E0208 23:37:02.377937 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused
Feb 8 23:37:02.348000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:37:02.348000 audit[2256]: AVC avc: denied { mac_admin } for pid=2256 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:37:02.348000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:37:02.348000 audit[2256]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0002e6b80 a1=c001066c60 a2=c0002f6a50 a3=25 items=0 ppid=1 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.348000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:37:02.376000 audit[2266]: NETFILTER_CFG table=mangle:32 family=2 entries=2 op=nft_register_chain pid=2266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.376000 audit[2266]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeb18a0390 a2=0 a3=7ffeb18a037c items=0 ppid=2256 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.376000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 8 23:37:02.381000 audit[2267]: 
NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.381000 audit[2267]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd39efa580 a2=0 a3=7ffd39efa56c items=0 ppid=2256 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.390872 kernel: audit: type=1300 audit(1707435422.348:194): arch=c000003e syscall=188 success=no exit=-22 a0=c000db7f80 a1=c001066c48 a2=c000db7f50 a3=25 items=0 ppid=1 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 8 23:37:02.392494 kubelet[2256]: E0208 23:37:02.392396 2256 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b207786160a6a0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 
8, 23, 37, 2, 338033312, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 338033312, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.4:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.4:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:37:02.393000 audit[2271]: NETFILTER_CFG table=filter:34 family=2 entries=2 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.393000 audit[2271]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdb4d4e270 a2=0 a3=7ffdb4d4e25c items=0 ppid=2256 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 8 23:37:02.397000 audit[2274]: NETFILTER_CFG table=filter:35 family=2 entries=2 op=nft_register_chain pid=2274 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.397000 audit[2274]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd767abde0 a2=0 a3=7ffd767abdcc items=0 ppid=2256 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 8 23:37:02.405000 audit[2278]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2278 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.405000 audit[2278]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcd894bff0 a2=0 a3=7ffcd894bfdc items=0 ppid=2256 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 8 23:37:02.406000 audit[2279]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.406000 audit[2279]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff90ad3040 a2=0 a3=7fff90ad302c items=0 ppid=2256 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.406000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 8 23:37:02.431640 kubelet[2256]: I0208 23:37:02.431615 2256 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:37:02.431640 kubelet[2256]: I0208 23:37:02.431634 2256 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:37:02.431822 kubelet[2256]: I0208 23:37:02.431651 2256 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:37:02.436896 kubelet[2256]: I0208 23:37:02.436869 2256 policy_none.go:49] "None policy: Start" Feb 8 23:37:02.437410 kubelet[2256]: I0208 23:37:02.437385 2256 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:37:02.437496 kubelet[2256]: I0208 
23:37:02.437414 2256 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:37:02.445145 kubelet[2256]: I0208 23:37:02.445110 2256 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:37:02.444000 audit[2256]: AVC avc: denied { mac_admin } for pid=2256 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:37:02.444000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:37:02.444000 audit[2256]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001244b10 a1=c001145cf8 a2=c001244ae0 a3=25 items=0 ppid=1 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.444000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:37:02.445476 kubelet[2256]: I0208 23:37:02.445213 2256 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 8 23:37:02.445476 kubelet[2256]: I0208 23:37:02.445397 2256 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:37:02.447647 kubelet[2256]: E0208 23:37:02.447630 2256 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:02.453705 kubelet[2256]: I0208 23:37:02.453573 2256 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:02.454193 kubelet[2256]: E0208 23:37:02.454178 2256 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:02.453000 audit[2284]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.453000 audit[2284]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdb509cc00 a2=0 a3=7ffdb509cbec items=0 ppid=2256 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.453000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 8 23:37:02.474000 audit[2287]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2287 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.474000 audit[2287]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffd5f3584c0 a2=0 a3=7ffd5f3584ac items=0 ppid=2256 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 8 23:37:02.475000 audit[2288]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.475000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd203c5940 a2=0 a3=7ffd203c592c items=0 ppid=2256 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 8 23:37:02.476000 audit[2289]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.476000 audit[2289]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe12994dd0 a2=0 a3=7ffe12994dbc items=0 ppid=2256 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 8 23:37:02.478000 audit[2291]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_rule pid=2291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.478000 audit[2291]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=216 a0=3 a1=7ffd05928ad0 a2=0 a3=7ffd05928abc items=0 ppid=2256 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 8 23:37:02.480000 audit[2293]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_rule pid=2293 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.480000 audit[2293]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc941a4470 a2=0 a3=7ffc941a445c items=0 ppid=2256 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 8 23:37:02.483000 audit[2295]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_rule pid=2295 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.483000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffefe79d710 a2=0 a3=7ffefe79d6fc items=0 ppid=2256 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.483000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 8 23:37:02.485000 audit[2297]: NETFILTER_CFG table=nat:45 family=2 entries=1 op=nft_register_rule pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.485000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffca4c300c0 a2=0 a3=7ffca4c300ac items=0 ppid=2256 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.485000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 8 23:37:02.487000 audit[2299]: NETFILTER_CFG table=nat:46 family=2 entries=1 op=nft_register_rule pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.487000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffc700fbba0 a2=0 a3=7ffc700fbb8c items=0 ppid=2256 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 8 23:37:02.488519 kubelet[2256]: I0208 23:37:02.488496 2256 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 8 23:37:02.488000 audit[2300]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.488000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc3bf340c0 a2=0 a3=7ffc3bf340ac items=0 ppid=2256 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.488000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 8 23:37:02.489000 audit[2301]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.489000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc33e2a090 a2=0 a3=7ffc33e2a07c items=0 ppid=2256 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 8 23:37:02.490000 audit[2302]: NETFILTER_CFG table=nat:49 family=10 entries=2 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.490000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd18be0b00 a2=0 a3=7ffd18be0aec items=0 ppid=2256 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.490000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 8 23:37:02.490000 audit[2303]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.490000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3a187190 a2=0 a3=7ffd3a18717c items=0 ppid=2256 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 8 23:37:02.491000 audit[2305]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:02.491000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe5bfe850 a2=0 a3=7fffe5bfe83c items=0 ppid=2256 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 8 23:37:02.493000 audit[2306]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_rule pid=2306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.493000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffefa48d50 a2=0 a3=7fffefa48d3c items=0 ppid=2256 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.493000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 8 23:37:02.493000 audit[2307]: NETFILTER_CFG table=filter:53 family=10 entries=2 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.493000 audit[2307]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff52ff7020 a2=0 a3=7fff52ff700c items=0 ppid=2256 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.493000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 8 23:37:02.496000 audit[2309]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=2309 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.496000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffcc4219990 a2=0 a3=7ffcc421997c items=0 ppid=2256 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 8 23:37:02.497000 audit[2310]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.497000 audit[2310]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe45c136d0 a2=0 a3=7ffe45c136bc items=0 ppid=2256 
pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 8 23:37:02.498000 audit[2311]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.498000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2744ed00 a2=0 a3=7fff2744ecec items=0 ppid=2256 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 8 23:37:02.500000 audit[2313]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_rule pid=2313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.500000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd1fe3d470 a2=0 a3=7ffd1fe3d45c items=0 ppid=2256 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 8 23:37:02.502000 audit[2315]: NETFILTER_CFG table=nat:58 family=10 entries=2 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.502000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 
a0=3 a1=7ffd4674f580 a2=0 a3=7ffd4674f56c items=0 ppid=2256 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 8 23:37:02.504000 audit[2317]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.504000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd2cdba6f0 a2=0 a3=7ffd2cdba6dc items=0 ppid=2256 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 8 23:37:02.506000 audit[2319]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_rule pid=2319 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.506000 audit[2319]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe375b9850 a2=0 a3=7ffe375b983c items=0 ppid=2256 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.506000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 8 23:37:02.523000 audit[2321]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_rule pid=2321 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.523000 audit[2321]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fff427d5320 a2=0 a3=7fff427d530c items=0 ppid=2256 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.523000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 8 23:37:02.524594 kubelet[2256]: I0208 23:37:02.524555 2256 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:37:02.524594 kubelet[2256]: I0208 23:37:02.524576 2256 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:37:02.524721 kubelet[2256]: I0208 23:37:02.524609 2256 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:37:02.524721 kubelet[2256]: E0208 23:37:02.524656 2256 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:37:02.525281 kubelet[2256]: W0208 23:37:02.525243 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:02.525364 kubelet[2256]: E0208 23:37:02.525291 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:02.525000 audit[2322]: NETFILTER_CFG table=mangle:62 family=10 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.525000 audit[2322]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3aef6cf0 a2=0 a3=7ffc3aef6cdc items=0 ppid=2256 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.525000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 8 23:37:02.526000 audit[2323]: NETFILTER_CFG table=nat:63 family=10 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.526000 audit[2323]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffd9f877390 a2=0 a3=7ffd9f87737c items=0 ppid=2256 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 8 23:37:02.527000 audit[2324]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:02.527000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8b38d1d0 a2=0 a3=7ffe8b38d1bc items=0 ppid=2256 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:02.527000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 8 23:37:02.578752 kubelet[2256]: E0208 23:37:02.578600 2256 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-9933156126?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:02.624909 kubelet[2256]: I0208 23:37:02.624863 2256 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:02.626714 kubelet[2256]: I0208 23:37:02.626688 2256 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:02.628177 kubelet[2256]: I0208 23:37:02.628154 2256 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:02.630606 kubelet[2256]: I0208 23:37:02.630582 2256 status_manager.go:698] "Failed to get status for pod" podUID=bcfa72e7cf8f4ed43bfe2ef57b11e5f6 
pod="kube-system/kube-apiserver-ci-3510.3.2-a-9933156126" err="Get \"https://10.200.8.4:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-9933156126\": dial tcp 10.200.8.4:6443: connect: connection refused" Feb 8 23:37:02.641600 kubelet[2256]: I0208 23:37:02.641570 2256 status_manager.go:698] "Failed to get status for pod" podUID=29b4916a2c133d0e7aa1ce6689159bdf pod="kube-system/kube-scheduler-ci-3510.3.2-a-9933156126" err="Get \"https://10.200.8.4:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-9933156126\": dial tcp 10.200.8.4:6443: connect: connection refused" Feb 8 23:37:02.641835 kubelet[2256]: I0208 23:37:02.641817 2256 status_manager.go:698] "Failed to get status for pod" podUID=8a49eb83921b74541969258c18270f6a pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" err="Get \"https://10.200.8.4:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-9933156126\": dial tcp 10.200.8.4:6443: connect: connection refused" Feb 8 23:37:02.653910 kubelet[2256]: I0208 23:37:02.653890 2256 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcfa72e7cf8f4ed43bfe2ef57b11e5f6-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-9933156126\" (UID: \"bcfa72e7cf8f4ed43bfe2ef57b11e5f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-9933156126" Feb 8 23:37:02.653998 kubelet[2256]: I0208 23:37:02.653932 2256 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:02.653998 kubelet[2256]: I0208 23:37:02.653965 2256 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:02.653998 kubelet[2256]: I0208 23:37:02.653992 2256 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcfa72e7cf8f4ed43bfe2ef57b11e5f6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-9933156126\" (UID: \"bcfa72e7cf8f4ed43bfe2ef57b11e5f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-9933156126" Feb 8 23:37:02.654133 kubelet[2256]: I0208 23:37:02.654024 2256 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcfa72e7cf8f4ed43bfe2ef57b11e5f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-9933156126\" (UID: \"bcfa72e7cf8f4ed43bfe2ef57b11e5f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-9933156126" Feb 8 23:37:02.654133 kubelet[2256]: I0208 23:37:02.654054 2256 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:02.654133 kubelet[2256]: I0208 23:37:02.654092 2256 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:02.654133 kubelet[2256]: I0208 23:37:02.654123 2256 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:02.654277 kubelet[2256]: I0208 23:37:02.654154 2256 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29b4916a2c133d0e7aa1ce6689159bdf-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-9933156126\" (UID: \"29b4916a2c133d0e7aa1ce6689159bdf\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-9933156126" Feb 8 23:37:02.655962 kubelet[2256]: I0208 23:37:02.655949 2256 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:02.656219 kubelet[2256]: E0208 23:37:02.656199 2256 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:02.937074 env[1405]: time="2024-02-08T23:37:02.936525808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-9933156126,Uid:bcfa72e7cf8f4ed43bfe2ef57b11e5f6,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:02.938131 env[1405]: time="2024-02-08T23:37:02.938083534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-9933156126,Uid:29b4916a2c133d0e7aa1ce6689159bdf,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:02.938611 env[1405]: time="2024-02-08T23:37:02.938582942Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-9933156126,Uid:8a49eb83921b74541969258c18270f6a,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:02.979901 kubelet[2256]: E0208 23:37:02.979862 2256 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-9933156126?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.058855 kubelet[2256]: I0208 23:37:03.058821 2256 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:03.059229 kubelet[2256]: E0208 23:37:03.059191 2256 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:03.383937 kubelet[2256]: W0208 23:37:03.383875 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.383937 kubelet[2256]: E0208 23:37:03.383940 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.426399 kubelet[2256]: W0208 23:37:03.426340 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-9933156126&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.426545 kubelet[2256]: E0208 23:37:03.426406 2256 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-9933156126&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.525501 kubelet[2256]: W0208 23:37:03.525430 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.525501 kubelet[2256]: E0208 23:37:03.525508 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.607001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658630912.mount: Deactivated successfully. 
Feb 8 23:37:03.633443 env[1405]: time="2024-02-08T23:37:03.633383254Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.638002 env[1405]: time="2024-02-08T23:37:03.637897626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.645486 env[1405]: time="2024-02-08T23:37:03.645454647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.651408 env[1405]: time="2024-02-08T23:37:03.651369841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.653931 env[1405]: time="2024-02-08T23:37:03.653898281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.658212 env[1405]: time="2024-02-08T23:37:03.658180550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.664757 env[1405]: time="2024-02-08T23:37:03.664724054Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.667433 env[1405]: time="2024-02-08T23:37:03.667401397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.671758 env[1405]: time="2024-02-08T23:37:03.671727266Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.674854 env[1405]: time="2024-02-08T23:37:03.674824215Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.678486 env[1405]: time="2024-02-08T23:37:03.678451073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.686257 env[1405]: time="2024-02-08T23:37:03.686227997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:03.767931 env[1405]: time="2024-02-08T23:37:03.767859899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:03.768097 env[1405]: time="2024-02-08T23:37:03.767906700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:03.768097 env[1405]: time="2024-02-08T23:37:03.767920900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:03.768374 env[1405]: time="2024-02-08T23:37:03.768323606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:03.768510 env[1405]: time="2024-02-08T23:37:03.768358407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:03.768510 env[1405]: time="2024-02-08T23:37:03.768372407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:03.768692 env[1405]: time="2024-02-08T23:37:03.768541110Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aaa1430c059533be6dd488a060de48b3c23c0f91ca15ad344afb2e67cff30dd2 pid=2337 runtime=io.containerd.runc.v2 Feb 8 23:37:03.769559 env[1405]: time="2024-02-08T23:37:03.769210521Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d65f4075920fa1b3fd0e25fe9f7f255a2c8bf93baa5674d0f4300790f5c77052 pid=2340 runtime=io.containerd.runc.v2 Feb 8 23:37:03.780581 kubelet[2256]: E0208 23:37:03.780536 2256 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-9933156126?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.789680 env[1405]: time="2024-02-08T23:37:03.789600646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:03.789856 env[1405]: time="2024-02-08T23:37:03.789825249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:03.790070 env[1405]: time="2024-02-08T23:37:03.790038753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:03.790326 env[1405]: time="2024-02-08T23:37:03.790295657Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25c58fb7f23c29974576e73b34a2c8a9c043a963bc2a4a949e4915cccdba610b pid=2374 runtime=io.containerd.runc.v2 Feb 8 23:37:03.863894 kubelet[2256]: W0208 23:37:03.863843 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.864090 kubelet[2256]: E0208 23:37:03.864079 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:03.864854 kubelet[2256]: I0208 23:37:03.864814 2256 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:03.865216 kubelet[2256]: E0208 23:37:03.865201 2256 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:03.877107 env[1405]: time="2024-02-08T23:37:03.877061241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-9933156126,Uid:bcfa72e7cf8f4ed43bfe2ef57b11e5f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaa1430c059533be6dd488a060de48b3c23c0f91ca15ad344afb2e67cff30dd2\"" Feb 8 23:37:03.880929 env[1405]: time="2024-02-08T23:37:03.880895202Z" level=info msg="CreateContainer within sandbox \"aaa1430c059533be6dd488a060de48b3c23c0f91ca15ad344afb2e67cff30dd2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:37:03.916379 env[1405]: 
time="2024-02-08T23:37:03.914790643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-9933156126,Uid:8a49eb83921b74541969258c18270f6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d65f4075920fa1b3fd0e25fe9f7f255a2c8bf93baa5674d0f4300790f5c77052\"" Feb 8 23:37:03.920178 env[1405]: time="2024-02-08T23:37:03.920144128Z" level=info msg="CreateContainer within sandbox \"d65f4075920fa1b3fd0e25fe9f7f255a2c8bf93baa5674d0f4300790f5c77052\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:37:03.923260 env[1405]: time="2024-02-08T23:37:03.923162076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-9933156126,Uid:29b4916a2c133d0e7aa1ce6689159bdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"25c58fb7f23c29974576e73b34a2c8a9c043a963bc2a4a949e4915cccdba610b\"" Feb 8 23:37:03.925171 env[1405]: time="2024-02-08T23:37:03.925143908Z" level=info msg="CreateContainer within sandbox \"25c58fb7f23c29974576e73b34a2c8a9c043a963bc2a4a949e4915cccdba610b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:37:04.417454 kubelet[2256]: E0208 23:37:04.417417 2256 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:05.064265 kubelet[2256]: W0208 23:37:05.064164 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-9933156126&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:05.064265 kubelet[2256]: E0208 23:37:05.064215 2256 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-9933156126&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:06.057921 kubelet[2256]: E0208 23:37:05.381535 2256 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-9933156126?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:06.057921 kubelet[2256]: W0208 23:37:05.410005 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:06.057921 kubelet[2256]: E0208 23:37:05.410037 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:06.057921 kubelet[2256]: I0208 23:37:05.467182 2256 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:06.057921 kubelet[2256]: E0208 23:37:05.467526 2256 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:06.057921 kubelet[2256]: W0208 23:37:05.561077 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection 
refused Feb 8 23:37:06.057921 kubelet[2256]: E0208 23:37:05.561114 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:06.833847 kubelet[2256]: W0208 23:37:06.833802 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:06.833847 kubelet[2256]: E0208 23:37:06.833847 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:07.101975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32856362.mount: Deactivated successfully. 
Feb 8 23:37:08.533814 kubelet[2256]: E0208 23:37:08.533775 2256 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:08.582604 kubelet[2256]: E0208 23:37:08.582552 2256 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: Get "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-9933156126?timeout=10s": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:08.669799 kubelet[2256]: I0208 23:37:08.669764 2256 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:08.670155 kubelet[2256]: E0208 23:37:08.670120 2256 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:08.815718 kubelet[2256]: W0208 23:37:08.815577 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-9933156126&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:08.815718 kubelet[2256]: E0208 23:37:08.815626 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-9933156126&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:09.124565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421295536.mount: Deactivated successfully. 
Feb 8 23:37:09.136710 kubelet[2256]: W0208 23:37:09.136676 2256 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:09.136710 kubelet[2256]: E0208 23:37:09.136715 2256 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 8 23:37:09.143375 env[1405]: time="2024-02-08T23:37:09.143310630Z" level=info msg="CreateContainer within sandbox \"aaa1430c059533be6dd488a060de48b3c23c0f91ca15ad344afb2e67cff30dd2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"78afc2057092495ab3d9694e0b56a67f0711d70ce12bbce602355df7c6493903\"" Feb 8 23:37:09.146831 env[1405]: time="2024-02-08T23:37:09.146798577Z" level=info msg="CreateContainer within sandbox \"25c58fb7f23c29974576e73b34a2c8a9c043a963bc2a4a949e4915cccdba610b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f26bead6eddc59cb3a687051d8045b8fea4f8905a161723ae31ef08e15933976\"" Feb 8 23:37:09.147057 env[1405]: time="2024-02-08T23:37:09.147028581Z" level=info msg="StartContainer for \"78afc2057092495ab3d9694e0b56a67f0711d70ce12bbce602355df7c6493903\"" Feb 8 23:37:09.151652 env[1405]: time="2024-02-08T23:37:09.151616043Z" level=info msg="StartContainer for \"f26bead6eddc59cb3a687051d8045b8fea4f8905a161723ae31ef08e15933976\"" Feb 8 23:37:09.151990 env[1405]: time="2024-02-08T23:37:09.151957048Z" level=info msg="CreateContainer within sandbox \"d65f4075920fa1b3fd0e25fe9f7f255a2c8bf93baa5674d0f4300790f5c77052\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"6e097d50b94debd823370d0d5acd6340ede6623194246d2ceeb2fa6595e8306c\"" Feb 8 23:37:09.155732 env[1405]: time="2024-02-08T23:37:09.152404854Z" level=info msg="StartContainer for \"6e097d50b94debd823370d0d5acd6340ede6623194246d2ceeb2fa6595e8306c\"" Feb 8 23:37:09.283060 env[1405]: time="2024-02-08T23:37:09.282999945Z" level=info msg="StartContainer for \"78afc2057092495ab3d9694e0b56a67f0711d70ce12bbce602355df7c6493903\" returns successfully" Feb 8 23:37:09.315615 env[1405]: time="2024-02-08T23:37:09.315570492Z" level=info msg="StartContainer for \"f26bead6eddc59cb3a687051d8045b8fea4f8905a161723ae31ef08e15933976\" returns successfully" Feb 8 23:37:09.324911 env[1405]: time="2024-02-08T23:37:09.324868719Z" level=info msg="StartContainer for \"6e097d50b94debd823370d0d5acd6340ede6623194246d2ceeb2fa6595e8306c\" returns successfully" Feb 8 23:37:12.346642 kubelet[2256]: E0208 23:37:12.346584 2256 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.2-a-9933156126" not found Feb 8 23:37:12.447581 kubelet[2256]: E0208 23:37:12.447137 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b207786160a6a0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", 
Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 338033312, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 338033312, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:37:12.447997 kubelet[2256]: E0208 23:37:12.447947 2256 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:12.503536 kubelet[2256]: E0208 23:37:12.503436 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b2077861f765e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 347912674, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 347912674, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:37:12.558575 kubelet[2256]: E0208 23:37:12.558454 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b2077866e563a0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-9933156126 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430618528, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430618528, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:37:12.619134 kubelet[2256]: E0208 23:37:12.618952 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b2077866e58efc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-9933156126 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430629628, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430629628, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:37:12.674398 kubelet[2256]: E0208 23:37:12.674303 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b2077866e59dd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-9933156126 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430633428, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430633428, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:37:12.727422 kubelet[2256]: E0208 23:37:12.727327 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b2077867d9e21a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 446641690, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 446641690, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:37:12.782678 kubelet[2256]: E0208 23:37:12.782549 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b2077866e563a0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-9933156126 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430618528, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 453528103, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:37:12.837865 kubelet[2256]: E0208 23:37:12.837746 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b2077866e58efc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-9933156126 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430629628, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 453541003, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:37:12.895203 kubelet[2256]: E0208 23:37:12.895010 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b2077866e59dd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-9933156126 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430633428, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 453544803, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:37:13.246575 kubelet[2256]: E0208 23:37:13.246462 2256 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-9933156126.17b2077866e563a0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-9933156126", UID:"ci-3510.3.2-a-9933156126", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-9933156126 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 430618528, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 37, 2, 626573935, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:37:13.784911 kubelet[2256]: E0208 23:37:13.784873 2256 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.2-a-9933156126" not found Feb 8 23:37:14.987314 kubelet[2256]: E0208 23:37:14.987279 2256 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-9933156126\" not found" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:15.072465 kubelet[2256]: I0208 23:37:15.072431 2256 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:15.187540 kubelet[2256]: I0208 23:37:15.187504 2256 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:15.195651 kubelet[2256]: E0208 23:37:15.195619 2256 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:15.296734 kubelet[2256]: E0208 23:37:15.296581 2256 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:15.396764 kubelet[2256]: E0208 23:37:15.396733 2256 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:15.497405 kubelet[2256]: E0208 23:37:15.497367 2256 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:15.598609 kubelet[2256]: E0208 23:37:15.598488 2256 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:15.699155 kubelet[2256]: E0208 23:37:15.699116 2256 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:15.800432 kubelet[2256]: E0208 23:37:15.800388 2256 kubelet_node_status.go:458] "Error 
getting the current node from lister" err="node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:15.901256 kubelet[2256]: E0208 23:37:15.901132 2256 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:15.923619 systemd[1]: Reloading. Feb 8 23:37:16.001312 kubelet[2256]: E0208 23:37:16.001271 2256 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-9933156126\" not found" Feb 8 23:37:16.020266 /usr/lib/systemd/system-generators/torcx-generator[2581]: time="2024-02-08T23:37:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:37:16.024465 /usr/lib/systemd/system-generators/torcx-generator[2581]: time="2024-02-08T23:37:16Z" level=info msg="torcx already run" Feb 8 23:37:16.121516 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:37:16.121535 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:37:16.140268 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:37:16.238690 kubelet[2256]: I0208 23:37:16.238443 2256 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:37:16.238750 systemd[1]: Stopping kubelet.service... Feb 8 23:37:16.254169 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:37:16.254578 systemd[1]: Stopped kubelet.service. 
Feb 8 23:37:16.260184 kernel: kauditd_printk_skb: 108 callbacks suppressed Feb 8 23:37:16.260259 kernel: audit: type=1131 audit(1707435436.252:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:16.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:16.257044 systemd[1]: Started kubelet.service. Feb 8 23:37:16.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:16.289684 kernel: audit: type=1130 audit(1707435436.253:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:16.358145 kubelet[2651]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:37:16.358145 kubelet[2651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:37:16.358583 kubelet[2651]: I0208 23:37:16.358191 2651 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:37:16.359698 kubelet[2651]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 8 23:37:16.359698 kubelet[2651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:37:16.362577 kubelet[2651]: I0208 23:37:16.362560 2651 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:37:16.362709 kubelet[2651]: I0208 23:37:16.362697 2651 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:37:16.362946 kubelet[2651]: I0208 23:37:16.362934 2651 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:37:16.364081 kubelet[2651]: I0208 23:37:16.364067 2651 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:37:16.364929 kubelet[2651]: I0208 23:37:16.364915 2651 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:37:16.368291 kubelet[2651]: I0208 23:37:16.368267 2651 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:37:16.368838 kubelet[2651]: I0208 23:37:16.368826 2651 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:37:16.368961 kubelet[2651]: I0208 23:37:16.368950 2651 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:37:16.369072 kubelet[2651]: I0208 23:37:16.369065 2651 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:37:16.369121 kubelet[2651]: I0208 23:37:16.369116 2651 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:37:16.369190 kubelet[2651]: I0208 23:37:16.369185 2651 
state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:37:16.372285 kubelet[2651]: I0208 23:37:16.372270 2651 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:37:16.372411 kubelet[2651]: I0208 23:37:16.372382 2651 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:37:16.372492 kubelet[2651]: I0208 23:37:16.372482 2651 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:37:16.372573 kubelet[2651]: I0208 23:37:16.372567 2651 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:37:16.375419 kubelet[2651]: I0208 23:37:16.375396 2651 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:37:16.394607 kubelet[2651]: I0208 23:37:16.389853 2651 server.go:1186] "Started kubelet" Feb 8 23:37:16.396000 audit[2651]: AVC avc: denied { mac_admin } for pid=2651 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:37:16.411943 kubelet[2651]: E0208 23:37:16.398506 2651 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:37:16.411943 kubelet[2651]: E0208 23:37:16.398540 2651 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:37:16.411943 kubelet[2651]: I0208 23:37:16.405987 2651 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:37:16.411943 kubelet[2651]: I0208 23:37:16.406598 2651 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:37:16.412265 kubelet[2651]: I0208 23:37:16.412242 2651 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 8 23:37:16.412470 kubelet[2651]: I0208 23:37:16.412423 2651 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 8 23:37:16.412590 kubelet[2651]: I0208 23:37:16.412579 2651 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:37:16.412675 kernel: audit: type=1400 audit(1707435436.396:232): avc: denied { mac_admin } for pid=2651 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:37:16.414997 kubelet[2651]: I0208 23:37:16.414982 2651 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:37:16.415200 kubelet[2651]: I0208 23:37:16.415186 2651 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:37:16.396000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:37:16.424682 kernel: audit: type=1401 audit(1707435436.396:232): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:37:16.456196 kernel: audit: type=1300 audit(1707435436.396:232): arch=c000003e syscall=188 success=no exit=-22 a0=c00022def0 a1=c000ec1ed8 a2=c00022dec0 a3=25 items=0 ppid=1 pid=2651 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:16.396000 audit[2651]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00022def0 a1=c000ec1ed8 a2=c00022dec0 a3=25 items=0 ppid=1 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:16.396000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:37:16.476987 kernel: audit: type=1327 audit(1707435436.396:232): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:37:16.410000 audit[2651]: AVC avc: denied { mac_admin } for pid=2651 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:37:16.492536 kernel: audit: type=1400 audit(1707435436.410:233): avc: denied { mac_admin } for pid=2651 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:37:16.410000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:37:16.508675 kernel: audit: type=1401 audit(1707435436.410:233): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:37:16.508752 kernel: audit: type=1300 audit(1707435436.410:233): arch=c000003e syscall=188 
success=no exit=-22 a0=c000b65a20 a1=c000ec1ef0 a2=c000bd8000 a3=25 items=0 ppid=1 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:16.410000 audit[2651]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b65a20 a1=c000ec1ef0 a2=c000bd8000 a3=25 items=0 ppid=1 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:16.410000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:37:16.546025 kernel: audit: type=1327 audit(1707435436.410:233): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:37:16.559353 kubelet[2651]: I0208 23:37:16.559333 2651 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 8 23:37:16.580848 kubelet[2651]: I0208 23:37:16.580824 2651 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:16.605866 kubelet[2651]: I0208 23:37:16.605840 2651 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:16.606088 kubelet[2651]: I0208 23:37:16.606064 2651 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-9933156126" Feb 8 23:37:16.651956 kubelet[2651]: I0208 23:37:16.651933 2651 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:37:16.652136 kubelet[2651]: I0208 23:37:16.652125 2651 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:37:16.652219 kubelet[2651]: I0208 23:37:16.652211 2651 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:37:16.652431 kubelet[2651]: I0208 23:37:16.652418 2651 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:37:16.652519 kubelet[2651]: I0208 23:37:16.652511 2651 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:37:16.652579 kubelet[2651]: I0208 23:37:16.652571 2651 policy_none.go:49] "None policy: Start" Feb 8 23:37:16.653291 kubelet[2651]: I0208 23:37:16.653273 2651 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:37:16.653410 kubelet[2651]: I0208 23:37:16.653401 2651 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:37:16.653646 kubelet[2651]: I0208 23:37:16.653633 2651 state_mem.go:75] "Updated machine memory state" Feb 8 23:37:16.655234 kubelet[2651]: I0208 23:37:16.655217 2651 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:37:16.653000 audit[2651]: AVC avc: denied { mac_admin } for pid=2651 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:37:16.653000 audit: SELINUX_ERR 
op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 8 23:37:16.653000 audit[2651]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0012040c0 a1=c001202168 a2=c001204090 a3=25 items=0 ppid=1 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:16.653000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 8 23:37:16.655706 kubelet[2651]: I0208 23:37:16.655690 2651 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 8 23:37:16.657479 kubelet[2651]: I0208 23:37:16.657463 2651 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:37:16.663060 kubelet[2651]: I0208 23:37:16.663044 2651 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:37:16.663175 kubelet[2651]: I0208 23:37:16.663165 2651 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:37:16.663249 kubelet[2651]: I0208 23:37:16.663241 2651 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:37:16.663369 kubelet[2651]: E0208 23:37:16.663359 2651 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:37:16.764215 kubelet[2651]: I0208 23:37:16.764092 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:16.764459 kubelet[2651]: I0208 23:37:16.764444 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:16.764568 kubelet[2651]: I0208 23:37:16.764558 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:16.881836 kubelet[2651]: I0208 23:37:16.881803 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcfa72e7cf8f4ed43bfe2ef57b11e5f6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-9933156126\" (UID: \"bcfa72e7cf8f4ed43bfe2ef57b11e5f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-9933156126" Feb 8 23:37:16.882102 kubelet[2651]: I0208 23:37:16.882083 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcfa72e7cf8f4ed43bfe2ef57b11e5f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-9933156126\" (UID: \"bcfa72e7cf8f4ed43bfe2ef57b11e5f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-9933156126" Feb 8 23:37:16.882230 kubelet[2651]: I0208 23:37:16.882221 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:16.882353 kubelet[2651]: I0208 23:37:16.882345 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:16.882478 kubelet[2651]: I0208 23:37:16.882470 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:16.882677 kubelet[2651]: I0208 23:37:16.882652 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcfa72e7cf8f4ed43bfe2ef57b11e5f6-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-9933156126\" (UID: \"bcfa72e7cf8f4ed43bfe2ef57b11e5f6\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-9933156126" Feb 8 23:37:16.882820 kubelet[2651]: I0208 23:37:16.882810 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:16.882948 kubelet[2651]: I0208 23:37:16.882938 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8a49eb83921b74541969258c18270f6a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-9933156126\" (UID: \"8a49eb83921b74541969258c18270f6a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:16.883082 kubelet[2651]: I0208 23:37:16.883073 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29b4916a2c133d0e7aa1ce6689159bdf-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-9933156126\" (UID: \"29b4916a2c133d0e7aa1ce6689159bdf\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-9933156126" Feb 8 23:37:17.373944 kubelet[2651]: I0208 23:37:17.373902 2651 apiserver.go:52] "Watching apiserver" Feb 8 23:37:17.415706 kubelet[2651]: I0208 23:37:17.415659 2651 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:37:17.485966 kubelet[2651]: I0208 23:37:17.485929 2651 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:37:17.738980 kubelet[2651]: E0208 23:37:17.738938 2651 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-9933156126\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-9933156126" Feb 8 23:37:17.891689 kubelet[2651]: E0208 23:37:17.890480 2651 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-9933156126\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-9933156126" Feb 8 23:37:17.981547 kubelet[2651]: E0208 23:37:17.981505 2651 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-9933156126\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" Feb 8 23:37:18.778804 kubelet[2651]: I0208 23:37:18.778774 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-9933156126" 
podStartSLOduration=2.7787192320000003 pod.CreationTimestamp="2024-02-08 23:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:18.778312227 +0000 UTC m=+2.505074352" watchObservedRunningTime="2024-02-08 23:37:18.778719232 +0000 UTC m=+2.505481457" Feb 8 23:37:18.779316 kubelet[2651]: I0208 23:37:18.779300 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-9933156126" podStartSLOduration=2.779274538 pod.CreationTimestamp="2024-02-08 23:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:18.402029865 +0000 UTC m=+2.128791990" watchObservedRunningTime="2024-02-08 23:37:18.779274538 +0000 UTC m=+2.506036663" Feb 8 23:37:20.210278 sudo[1835]: pam_unix(sudo:session): session closed for user root Feb 8 23:37:20.208000 audit[1835]: USER_END pid=1835 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 8 23:37:20.208000 audit[1835]: CRED_DISP pid=1835 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 8 23:37:20.308031 sshd[1831]: pam_unix(sshd:session): session closed for user core Feb 8 23:37:20.308000 audit[1831]: USER_END pid=1831 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:37:20.308000 audit[1831]: CRED_DISP pid=1831 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:37:20.311824 systemd[1]: sshd@6-10.200.8.4:22-10.200.12.6:51780.service: Deactivated successfully. Feb 8 23:37:20.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.4:22-10.200.12.6:51780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:20.313400 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:37:20.313967 systemd-logind[1392]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:37:20.315303 systemd-logind[1392]: Removed session 9. 
Feb 8 23:37:23.970855 kubelet[2651]: I0208 23:37:23.970824 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-9933156126" podStartSLOduration=7.970783094 pod.CreationTimestamp="2024-02-08 23:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:19.225128614 +0000 UTC m=+2.951890739" watchObservedRunningTime="2024-02-08 23:37:23.970783094 +0000 UTC m=+7.697545319" Feb 8 23:37:28.586323 kubelet[2651]: I0208 23:37:28.586292 2651 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:37:28.586842 env[1405]: time="2024-02-08T23:37:28.586784962Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:37:28.587167 kubelet[2651]: I0208 23:37:28.587019 2651 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:37:29.163962 kubelet[2651]: I0208 23:37:29.163911 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:29.262526 kubelet[2651]: I0208 23:37:29.262493 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a536797a-d839-4c1c-ad63-a492dc407b65-xtables-lock\") pod \"kube-proxy-5xbb8\" (UID: \"a536797a-d839-4c1c-ad63-a492dc407b65\") " pod="kube-system/kube-proxy-5xbb8" Feb 8 23:37:29.262735 kubelet[2651]: I0208 23:37:29.262545 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bhd6\" (UniqueName: \"kubernetes.io/projected/a536797a-d839-4c1c-ad63-a492dc407b65-kube-api-access-7bhd6\") pod \"kube-proxy-5xbb8\" (UID: \"a536797a-d839-4c1c-ad63-a492dc407b65\") " pod="kube-system/kube-proxy-5xbb8" Feb 8 23:37:29.262735 kubelet[2651]: I0208 23:37:29.262578 2651 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a536797a-d839-4c1c-ad63-a492dc407b65-kube-proxy\") pod \"kube-proxy-5xbb8\" (UID: \"a536797a-d839-4c1c-ad63-a492dc407b65\") " pod="kube-system/kube-proxy-5xbb8" Feb 8 23:37:29.262735 kubelet[2651]: I0208 23:37:29.262605 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a536797a-d839-4c1c-ad63-a492dc407b65-lib-modules\") pod \"kube-proxy-5xbb8\" (UID: \"a536797a-d839-4c1c-ad63-a492dc407b65\") " pod="kube-system/kube-proxy-5xbb8" Feb 8 23:37:29.291220 kubelet[2651]: I0208 23:37:29.291179 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:29.362885 kubelet[2651]: I0208 23:37:29.362834 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ee3e80a-8105-49ba-b59f-3072b52532a4-var-lib-calico\") pod \"tigera-operator-cfc98749c-77hb8\" (UID: \"6ee3e80a-8105-49ba-b59f-3072b52532a4\") " pod="tigera-operator/tigera-operator-cfc98749c-77hb8" Feb 8 23:37:29.363094 kubelet[2651]: I0208 23:37:29.362990 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdq2k\" (UniqueName: \"kubernetes.io/projected/6ee3e80a-8105-49ba-b59f-3072b52532a4-kube-api-access-cdq2k\") pod \"tigera-operator-cfc98749c-77hb8\" (UID: \"6ee3e80a-8105-49ba-b59f-3072b52532a4\") " pod="tigera-operator/tigera-operator-cfc98749c-77hb8" Feb 8 23:37:29.469914 env[1405]: time="2024-02-08T23:37:29.469868919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xbb8,Uid:a536797a-d839-4c1c-ad63-a492dc407b65,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:29.502855 env[1405]: time="2024-02-08T23:37:29.502080300Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:29.502855 env[1405]: time="2024-02-08T23:37:29.502180600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:29.502855 env[1405]: time="2024-02-08T23:37:29.502216101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:29.502855 env[1405]: time="2024-02-08T23:37:29.502498703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec9b7f776d05c7e7cf56093b998aa962f186300653f9083fffb8f9c73dfc5723 pid=2756 runtime=io.containerd.runc.v2 Feb 8 23:37:29.549116 env[1405]: time="2024-02-08T23:37:29.549072808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xbb8,Uid:a536797a-d839-4c1c-ad63-a492dc407b65,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec9b7f776d05c7e7cf56093b998aa962f186300653f9083fffb8f9c73dfc5723\"" Feb 8 23:37:29.552367 env[1405]: time="2024-02-08T23:37:29.551717231Z" level=info msg="CreateContainer within sandbox \"ec9b7f776d05c7e7cf56093b998aa962f186300653f9083fffb8f9c73dfc5723\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:37:29.583713 env[1405]: time="2024-02-08T23:37:29.583660709Z" level=info msg="CreateContainer within sandbox \"ec9b7f776d05c7e7cf56093b998aa962f186300653f9083fffb8f9c73dfc5723\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ee9af84abb31065edd9507fe7f5ee884ece8a0c29c64ca3184e3f64fbbf1ecb\"" Feb 8 23:37:29.585622 env[1405]: time="2024-02-08T23:37:29.584273415Z" level=info msg="StartContainer for \"1ee9af84abb31065edd9507fe7f5ee884ece8a0c29c64ca3184e3f64fbbf1ecb\"" Feb 8 23:37:29.595648 env[1405]: time="2024-02-08T23:37:29.595616913Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-cfc98749c-77hb8,Uid:6ee3e80a-8105-49ba-b59f-3072b52532a4,Namespace:tigera-operator,Attempt:0,}" Feb 8 23:37:29.627411 env[1405]: time="2024-02-08T23:37:29.627342589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:29.627608 env[1405]: time="2024-02-08T23:37:29.627412690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:29.627608 env[1405]: time="2024-02-08T23:37:29.627440490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:29.627895 env[1405]: time="2024-02-08T23:37:29.627781393Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/743714ce218efbb121a41dd63d82baf55de3f1ba743e9ea827bab2544785a2fe pid=2822 runtime=io.containerd.runc.v2 Feb 8 23:37:29.659790 env[1405]: time="2024-02-08T23:37:29.659734471Z" level=info msg="StartContainer for \"1ee9af84abb31065edd9507fe7f5ee884ece8a0c29c64ca3184e3f64fbbf1ecb\" returns successfully" Feb 8 23:37:29.727906 env[1405]: time="2024-02-08T23:37:29.727259058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-77hb8,Uid:6ee3e80a-8105-49ba-b59f-3072b52532a4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"743714ce218efbb121a41dd63d82baf55de3f1ba743e9ea827bab2544785a2fe\"" Feb 8 23:37:29.729418 env[1405]: time="2024-02-08T23:37:29.729388677Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 8 23:37:29.747699 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 8 23:37:29.747818 kernel: audit: type=1325 audit(1707435449.741:240): table=mangle:65 family=2 entries=1 op=nft_register_chain pid=2886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.741000 audit[2886]: 
NETFILTER_CFG table=mangle:65 family=2 entries=1 op=nft_register_chain pid=2886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.743000 audit[2887]: NETFILTER_CFG table=mangle:66 family=10 entries=1 op=nft_register_chain pid=2887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.767695 kernel: audit: type=1325 audit(1707435449.743:241): table=mangle:66 family=10 entries=1 op=nft_register_chain pid=2887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.767788 kernel: audit: type=1300 audit(1707435449.743:241): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffddad38d0 a2=0 a3=7fffddad38bc items=0 ppid=2808 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.743000 audit[2887]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffddad38d0 a2=0 a3=7fffddad38bc items=0 ppid=2808 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:37:29.793854 kernel: audit: type=1327 audit(1707435449.743:241): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:37:29.744000 audit[2888]: NETFILTER_CFG table=nat:67 family=10 entries=1 op=nft_register_chain pid=2888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.803569 kernel: audit: type=1325 audit(1707435449.744:242): table=nat:67 family=10 entries=1 op=nft_register_chain pid=2888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.803651 kernel: audit: type=1300 
audit(1707435449.744:242): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1fad6ad0 a2=0 a3=7ffe1fad6abc items=0 ppid=2808 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.744000 audit[2888]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1fad6ad0 a2=0 a3=7ffe1fad6abc items=0 ppid=2808 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:37:29.831118 kernel: audit: type=1327 audit(1707435449.744:242): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:37:29.745000 audit[2889]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.840687 kernel: audit: type=1325 audit(1707435449.745:243): table=filter:68 family=10 entries=1 op=nft_register_chain pid=2889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.840781 kernel: audit: type=1300 audit(1707435449.745:243): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd424ebe40 a2=0 a3=7ffd424ebe2c items=0 ppid=2808 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.745000 audit[2889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd424ebe40 a2=0 a3=7ffd424ebe2c items=0 ppid=2808 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:37:29.870678 kernel: audit: type=1327 audit(1707435449.745:243): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:37:29.741000 audit[2886]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc69effe10 a2=0 a3=7ffc69effdfc items=0 ppid=2808 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 8 23:37:29.756000 audit[2890]: NETFILTER_CFG table=nat:69 family=2 entries=1 op=nft_register_chain pid=2890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.756000 audit[2890]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4c7c4aa0 a2=0 a3=7ffc4c7c4a8c items=0 ppid=2808 pid=2890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 8 23:37:29.764000 audit[2891]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.764000 audit[2891]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc57a52d0 a2=0 a3=7ffcc57a52bc items=0 ppid=2808 pid=2891 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 8 23:37:29.857000 audit[2892]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=2892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.857000 audit[2892]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff24657920 a2=0 a3=7fff2465790c items=0 ppid=2808 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 8 23:37:29.857000 audit[2894]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.857000 audit[2894]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd64804380 a2=0 a3=7ffd6480436c items=0 ppid=2808 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 8 23:37:29.863000 audit[2897]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2897 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.863000 audit[2897]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe6a54f100 a2=0 a3=7ffe6a54f0ec items=0 ppid=2808 pid=2897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.863000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 8 23:37:29.868000 audit[2898]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_chain pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.868000 audit[2898]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc93c02580 a2=0 a3=7ffc93c0256c items=0 ppid=2808 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 8 23:37:29.872000 audit[2900]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=2900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.872000 audit[2900]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd7a16ebe0 a2=0 a3=7ffd7a16ebcc items=0 ppid=2808 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.872000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 8 23:37:29.873000 audit[2901]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_chain pid=2901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.873000 audit[2901]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffcc1f7000 a2=0 a3=7fffcc1f6fec items=0 ppid=2808 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 8 23:37:29.876000 audit[2903]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.876000 audit[2903]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe0c1ed780 a2=0 a3=7ffe0c1ed76c items=0 ppid=2808 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 8 23:37:29.879000 audit[2906]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.879000 audit[2906]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 
a1=7ffd6e619840 a2=0 a3=7ffd6e61982c items=0 ppid=2808 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 8 23:37:29.880000 audit[2907]: NETFILTER_CFG table=filter:79 family=2 entries=1 op=nft_register_chain pid=2907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.880000 audit[2907]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7e73b750 a2=0 a3=7ffc7e73b73c items=0 ppid=2808 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 8 23:37:29.883000 audit[2909]: NETFILTER_CFG table=filter:80 family=2 entries=1 op=nft_register_rule pid=2909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.883000 audit[2909]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb4c8a1b0 a2=0 a3=7fffb4c8a19c items=0 ppid=2808 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.883000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 8 23:37:29.884000 audit[2910]: NETFILTER_CFG table=filter:81 family=2 entries=1 op=nft_register_chain pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.884000 audit[2910]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5a0cca80 a2=0 a3=7ffe5a0cca6c items=0 ppid=2808 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 8 23:37:29.887000 audit[2912]: NETFILTER_CFG table=filter:82 family=2 entries=1 op=nft_register_rule pid=2912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.887000 audit[2912]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc4d5bf70 a2=0 a3=7fffc4d5bf5c items=0 ppid=2808 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.887000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:37:29.890000 audit[2915]: NETFILTER_CFG table=filter:83 family=2 entries=1 op=nft_register_rule pid=2915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.890000 audit[2915]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7fff788b0e30 a2=0 a3=7fff788b0e1c items=0 ppid=2808 pid=2915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:37:29.894000 audit[2918]: NETFILTER_CFG table=filter:84 family=2 entries=1 op=nft_register_rule pid=2918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.894000 audit[2918]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0a1f8730 a2=0 a3=7fff0a1f871c items=0 ppid=2808 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 8 23:37:29.895000 audit[2919]: NETFILTER_CFG table=nat:85 family=2 entries=1 op=nft_register_chain pid=2919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.895000 audit[2919]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe21f7e290 a2=0 a3=7ffe21f7e27c items=0 ppid=2808 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.895000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 8 23:37:29.898000 audit[2921]: NETFILTER_CFG table=nat:86 family=2 entries=1 op=nft_register_rule pid=2921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.898000 audit[2921]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe6dc4c8a0 a2=0 a3=7ffe6dc4c88c items=0 ppid=2808 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:37:29.901000 audit[2924]: NETFILTER_CFG table=nat:87 family=2 entries=1 op=nft_register_rule pid=2924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 8 23:37:29.901000 audit[2924]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd55f07fe0 a2=0 a3=7ffd55f07fcc items=0 ppid=2808 pid=2924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.901000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:37:29.934000 audit[2928]: NETFILTER_CFG table=filter:88 family=2 entries=6 op=nft_register_rule pid=2928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:29.934000 audit[2928]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff8d2d4c40 a2=0 a3=7fff8d2d4c2c items=0 ppid=2808 pid=2928 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:29.971000 audit[2928]: NETFILTER_CFG table=nat:89 family=2 entries=17 op=nft_register_chain pid=2928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:29.971000 audit[2928]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff8d2d4c40 a2=0 a3=7fff8d2d4c2c items=0 ppid=2808 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:29.973000 audit[2932]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.973000 audit[2932]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe22fdc310 a2=0 a3=7ffe22fdc2fc items=0 ppid=2808 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 8 23:37:29.977000 audit[2934]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.977000 audit[2934]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffc851ef950 a2=0 a3=7ffc851ef93c items=0 ppid=2808 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 8 23:37:29.982000 audit[2937]: NETFILTER_CFG table=filter:92 family=10 entries=2 op=nft_register_chain pid=2937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.982000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc20f96d50 a2=0 a3=7ffc20f96d3c items=0 ppid=2808 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 8 23:37:29.983000 audit[2938]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_chain pid=2938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.983000 audit[2938]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe363e4b40 a2=0 a3=7ffe363e4b2c items=0 ppid=2808 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.983000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 8 23:37:29.986000 audit[2940]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=2940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.986000 audit[2940]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc9a13a3c0 a2=0 a3=7ffc9a13a3ac items=0 ppid=2808 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 8 23:37:29.987000 audit[2941]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.987000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff39db0e30 a2=0 a3=7fff39db0e1c items=0 ppid=2808 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 8 23:37:29.989000 audit[2943]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.989000 audit[2943]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc74f2c690 a2=0 a3=7ffc74f2c67c items=0 ppid=2808 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.989000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 8 23:37:29.993000 audit[2946]: NETFILTER_CFG table=filter:97 family=10 entries=2 op=nft_register_chain pid=2946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.993000 audit[2946]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff752b2ab0 a2=0 a3=7fff752b2a9c items=0 ppid=2808 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 8 23:37:29.994000 audit[2947]: NETFILTER_CFG table=filter:98 family=10 entries=1 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.994000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb11ffa60 a2=0 a3=7ffdb11ffa4c items=0 ppid=2808 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.994000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 8 23:37:29.996000 audit[2949]: NETFILTER_CFG table=filter:99 
family=10 entries=1 op=nft_register_rule pid=2949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.996000 audit[2949]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe15ee9580 a2=0 a3=7ffe15ee956c items=0 ppid=2808 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.996000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 8 23:37:29.997000 audit[2950]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=2950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:29.997000 audit[2950]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9fcc1960 a2=0 a3=7ffe9fcc194c items=0 ppid=2808 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:29.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 8 23:37:30.000000 audit[2952]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=2952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:30.000000 audit[2952]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5e68b3b0 a2=0 a3=7fff5e68b39c items=0 ppid=2808 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:30.000000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 8 23:37:30.004000 audit[2955]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=2955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:30.004000 audit[2955]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcaff5420 a2=0 a3=7fffcaff540c items=0 ppid=2808 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:30.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 8 23:37:30.007000 audit[2958]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_rule pid=2958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:30.007000 audit[2958]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd71233830 a2=0 a3=7ffd7123381c items=0 ppid=2808 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:30.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 8 23:37:30.008000 audit[2959]: NETFILTER_CFG table=nat:104 family=10 
entries=1 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:30.008000 audit[2959]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc6c5de3d0 a2=0 a3=7ffc6c5de3bc items=0 ppid=2808 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:30.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 8 23:37:30.011000 audit[2961]: NETFILTER_CFG table=nat:105 family=10 entries=2 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:30.011000 audit[2961]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdbd7a7f20 a2=0 a3=7ffdbd7a7f0c items=0 ppid=2808 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:30.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:37:30.014000 audit[2964]: NETFILTER_CFG table=nat:106 family=10 entries=2 op=nft_register_chain pid=2964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 8 23:37:30.014000 audit[2964]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffffc347290 a2=0 a3=7ffffc34727c items=0 ppid=2808 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:30.014000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 8 23:37:30.019000 audit[2968]: NETFILTER_CFG table=filter:107 family=10 entries=3 op=nft_register_rule pid=2968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 8 23:37:30.019000 audit[2968]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe8c56a0e0 a2=0 a3=7ffe8c56a0cc items=0 ppid=2808 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:30.019000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:30.020000 audit[2968]: NETFILTER_CFG table=nat:108 family=10 entries=10 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 8 23:37:30.020000 audit[2968]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffe8c56a0e0 a2=0 a3=7ffe8c56a0cc items=0 ppid=2808 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:30.020000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:32.096679 env[1405]: time="2024-02-08T23:37:32.096625794Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:32.101749 env[1405]: time="2024-02-08T23:37:32.101712136Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:32.105317 env[1405]: time="2024-02-08T23:37:32.105282365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:32.108746 env[1405]: time="2024-02-08T23:37:32.108714393Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:32.109384 env[1405]: time="2024-02-08T23:37:32.109353699Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 8 23:37:32.112950 env[1405]: time="2024-02-08T23:37:32.112585525Z" level=info msg="CreateContainer within sandbox \"743714ce218efbb121a41dd63d82baf55de3f1ba743e9ea827bab2544785a2fe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 8 23:37:32.138127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912716145.mount: Deactivated successfully. Feb 8 23:37:32.146306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642196964.mount: Deactivated successfully. 
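The `audit: PROCTITLE` records above carry the invoked command line as a single hex string: the process argv, NUL-separated and hex-encoded. A minimal sketch of decoding one of these values (the example hex is taken verbatim from one of the `ip`/`iptables` records above):

```python
def decode_proctitle(hexstr: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
    return [part.decode() for part in bytes.fromhex(hexstr).split(b"\x00")]

# PROCTITLE value from one of the nft_register_chain records above:
argv = decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D5345525649434553002D74006E6174"
)
# -> ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-SERVICES', '-t', 'nat']
```

Decoded this way, the long run of audit records reads as kube-proxy creating the usual `KUBE-SERVICES`, `KUBE-NODEPORTS`, `KUBE-FORWARD`, etc. chains and rules via `xtables-nft-multi`.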
Feb 8 23:37:32.154586 env[1405]: time="2024-02-08T23:37:32.154552868Z" level=info msg="CreateContainer within sandbox \"743714ce218efbb121a41dd63d82baf55de3f1ba743e9ea827bab2544785a2fe\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5e3a7cf7ab276d40845cfc53aa216d890c4e98986a5e47a1a2372cf541e46fbd\"" Feb 8 23:37:32.155079 env[1405]: time="2024-02-08T23:37:32.155050073Z" level=info msg="StartContainer for \"5e3a7cf7ab276d40845cfc53aa216d890c4e98986a5e47a1a2372cf541e46fbd\"" Feb 8 23:37:32.208293 env[1405]: time="2024-02-08T23:37:32.208251708Z" level=info msg="StartContainer for \"5e3a7cf7ab276d40845cfc53aa216d890c4e98986a5e47a1a2372cf541e46fbd\" returns successfully" Feb 8 23:37:32.714711 kubelet[2651]: I0208 23:37:32.714673 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5xbb8" podStartSLOduration=3.714601152 pod.CreationTimestamp="2024-02-08 23:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:29.713808941 +0000 UTC m=+13.440571166" watchObservedRunningTime="2024-02-08 23:37:32.714601152 +0000 UTC m=+16.441363277" Feb 8 23:37:33.998000 audit[3032]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:33.998000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd73f92d00 a2=0 a3=7ffd73f92cec items=0 ppid=2808 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:33.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:33.999000 audit[3032]: NETFILTER_CFG table=nat:110 family=2 entries=20 
op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:33.999000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd73f92d00 a2=0 a3=7ffd73f92cec items=0 ppid=2808 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:33.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:34.035000 audit[3058]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:34.035000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff54c09ef0 a2=0 a3=7fff54c09edc items=0 ppid=2808 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:34.035000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:34.036000 audit[3058]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:34.036000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff54c09ef0 a2=0 a3=7fff54c09edc items=0 ppid=2808 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:34.036000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:34.104024 
kubelet[2651]: I0208 23:37:34.103981 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-77hb8" podStartSLOduration=-9.22337203175084e+09 pod.CreationTimestamp="2024-02-08 23:37:29 +0000 UTC" firstStartedPulling="2024-02-08 23:37:29.728959673 +0000 UTC m=+13.455721898" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:32.715746162 +0000 UTC m=+16.442508387" watchObservedRunningTime="2024-02-08 23:37:34.10393653 +0000 UTC m=+17.830698755" Feb 8 23:37:34.104544 kubelet[2651]: I0208 23:37:34.104141 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:34.193835 kubelet[2651]: I0208 23:37:34.193785 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:34.201826 kubelet[2651]: I0208 23:37:34.201795 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x7gh\" (UniqueName: \"kubernetes.io/projected/1a9830f6-8e58-4b38-b9d7-746a3c92e95e-kube-api-access-6x7gh\") pod \"calico-typha-74794d48cc-5zzf7\" (UID: \"1a9830f6-8e58-4b38-b9d7-746a3c92e95e\") " pod="calico-system/calico-typha-74794d48cc-5zzf7" Feb 8 23:37:34.202117 kubelet[2651]: I0208 23:37:34.202100 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1a9830f6-8e58-4b38-b9d7-746a3c92e95e-typha-certs\") pod \"calico-typha-74794d48cc-5zzf7\" (UID: \"1a9830f6-8e58-4b38-b9d7-746a3c92e95e\") " pod="calico-system/calico-typha-74794d48cc-5zzf7" Feb 8 23:37:34.202261 kubelet[2651]: I0208 23:37:34.202249 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a9830f6-8e58-4b38-b9d7-746a3c92e95e-tigera-ca-bundle\") pod \"calico-typha-74794d48cc-5zzf7\" (UID: \"1a9830f6-8e58-4b38-b9d7-746a3c92e95e\") " 
pod="calico-system/calico-typha-74794d48cc-5zzf7" Feb 8 23:37:34.303124 kubelet[2651]: I0208 23:37:34.302993 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9f310efb-23cd-4dae-811c-7af35a331183-flexvol-driver-host\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.303446 kubelet[2651]: I0208 23:37:34.303391 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9f310efb-23cd-4dae-811c-7af35a331183-policysync\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.303619 kubelet[2651]: I0208 23:37:34.303605 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9f310efb-23cd-4dae-811c-7af35a331183-node-certs\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.303791 kubelet[2651]: I0208 23:37:34.303776 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9f310efb-23cd-4dae-811c-7af35a331183-cni-log-dir\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.303970 kubelet[2651]: I0208 23:37:34.303943 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjtpm\" (UniqueName: \"kubernetes.io/projected/9f310efb-23cd-4dae-811c-7af35a331183-kube-api-access-bjtpm\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 
23:37:34.304181 kubelet[2651]: I0208 23:37:34.304167 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f310efb-23cd-4dae-811c-7af35a331183-var-lib-calico\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.304638 kubelet[2651]: I0208 23:37:34.304621 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9f310efb-23cd-4dae-811c-7af35a331183-cni-bin-dir\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.304845 kubelet[2651]: I0208 23:37:34.304831 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f310efb-23cd-4dae-811c-7af35a331183-lib-modules\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.305901 kubelet[2651]: I0208 23:37:34.305874 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f310efb-23cd-4dae-811c-7af35a331183-tigera-ca-bundle\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.306060 kubelet[2651]: I0208 23:37:34.306039 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f310efb-23cd-4dae-811c-7af35a331183-xtables-lock\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.306171 kubelet[2651]: I0208 23:37:34.306161 2651 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9f310efb-23cd-4dae-811c-7af35a331183-var-run-calico\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.306334 kubelet[2651]: I0208 23:37:34.306312 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9f310efb-23cd-4dae-811c-7af35a331183-cni-net-dir\") pod \"calico-node-xn925\" (UID: \"9f310efb-23cd-4dae-811c-7af35a331183\") " pod="calico-system/calico-node-xn925" Feb 8 23:37:34.322023 kubelet[2651]: I0208 23:37:34.322000 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:37:34.323088 kubelet[2651]: E0208 23:37:34.322830 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:34.406834 kubelet[2651]: I0208 23:37:34.406794 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-655cc\" (UniqueName: \"kubernetes.io/projected/b23333af-8873-429e-8aa7-941ea237b3cf-kube-api-access-655cc\") pod \"csi-node-driver-pfb4q\" (UID: \"b23333af-8873-429e-8aa7-941ea237b3cf\") " pod="calico-system/csi-node-driver-pfb4q" Feb 8 23:37:34.406834 kubelet[2651]: I0208 23:37:34.406847 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b23333af-8873-429e-8aa7-941ea237b3cf-registration-dir\") pod \"csi-node-driver-pfb4q\" (UID: \"b23333af-8873-429e-8aa7-941ea237b3cf\") " pod="calico-system/csi-node-driver-pfb4q" 
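The `SYSCALL` records in this log are all of the same shape: `arch=c000003e` is AUDIT_ARCH_X86_64, `syscall=46` is `sendmsg` on x86_64 (the netlink messages `xtables-nft-multi` sends to configure nftables), and `auid=4294967295`/`ses=4294967295` is `(uint32)-1`, meaning no audit login session (a daemon, not an interactive user). A small sketch of pulling those fields out of a record, assuming the usual space-separated `key=value` layout:

```python
def parse_audit_fields(record: str) -> dict:
    """Split an audit record's key=value pairs into a dict (values kept as strings)."""
    return dict(f.split("=", 1) for f in record.split() if "=" in f)

# Abbreviated record, fields copied from the SYSCALL entries above:
rec = 'arch=c000003e syscall=46 success=yes exit=748 auid=4294967295 uid=0 comm="ip6tables"'
fields = parse_audit_fields(rec)

assert fields["syscall"] == "46"          # sendmsg on x86_64 (arch c000003e)
assert int(fields["auid"]) == 2**32 - 1   # (uint32)-1: no audit login session
```

This does not handle quoted values containing spaces (none appear in these records), so treat it as a reading aid rather than a general audit-log parser.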
Feb 8 23:37:34.407071 kubelet[2651]: I0208 23:37:34.406969 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b23333af-8873-429e-8aa7-941ea237b3cf-socket-dir\") pod \"csi-node-driver-pfb4q\" (UID: \"b23333af-8873-429e-8aa7-941ea237b3cf\") " pod="calico-system/csi-node-driver-pfb4q" Feb 8 23:37:34.407071 kubelet[2651]: I0208 23:37:34.407056 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b23333af-8873-429e-8aa7-941ea237b3cf-varrun\") pod \"csi-node-driver-pfb4q\" (UID: \"b23333af-8873-429e-8aa7-941ea237b3cf\") " pod="calico-system/csi-node-driver-pfb4q" Feb 8 23:37:34.407154 kubelet[2651]: I0208 23:37:34.407088 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b23333af-8873-429e-8aa7-941ea237b3cf-kubelet-dir\") pod \"csi-node-driver-pfb4q\" (UID: \"b23333af-8873-429e-8aa7-941ea237b3cf\") " pod="calico-system/csi-node-driver-pfb4q" Feb 8 23:37:34.409142 kubelet[2651]: E0208 23:37:34.409110 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.409142 kubelet[2651]: W0208 23:37:34.409138 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.409318 kubelet[2651]: E0208 23:37:34.409172 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.412025 kubelet[2651]: E0208 23:37:34.412001 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.412025 kubelet[2651]: W0208 23:37:34.412021 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.412185 kubelet[2651]: E0208 23:37:34.412042 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.413809 kubelet[2651]: E0208 23:37:34.413793 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.413942 kubelet[2651]: W0208 23:37:34.413928 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.414102 env[1405]: time="2024-02-08T23:37:34.414061571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74794d48cc-5zzf7,Uid:1a9830f6-8e58-4b38-b9d7-746a3c92e95e,Namespace:calico-system,Attempt:0,}" Feb 8 23:37:34.414645 kubelet[2651]: E0208 23:37:34.414615 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.414755 kubelet[2651]: W0208 23:37:34.414742 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.415781 kubelet[2651]: E0208 23:37:34.415744 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.415781 kubelet[2651]: E0208 23:37:34.415761 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.415928 kubelet[2651]: W0208 23:37:34.415792 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.415998 kubelet[2651]: E0208 23:37:34.415984 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.416081 kubelet[2651]: E0208 23:37:34.416026 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.416153 kubelet[2651]: W0208 23:37:34.416142 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.416221 kubelet[2651]: E0208 23:37:34.416213 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.416460 kubelet[2651]: E0208 23:37:34.416449 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.416539 kubelet[2651]: W0208 23:37:34.416528 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.416611 kubelet[2651]: E0208 23:37:34.416603 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.416683 kubelet[2651]: E0208 23:37:34.416040 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.427349 kubelet[2651]: E0208 23:37:34.427327 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.427349 kubelet[2651]: W0208 23:37:34.427347 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.427717 kubelet[2651]: E0208 23:37:34.427697 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.428522 kubelet[2651]: E0208 23:37:34.428501 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.428522 kubelet[2651]: W0208 23:37:34.428521 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.428687 kubelet[2651]: E0208 23:37:34.428542 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.429027 kubelet[2651]: E0208 23:37:34.428791 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.429027 kubelet[2651]: W0208 23:37:34.428804 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.429027 kubelet[2651]: E0208 23:37:34.428821 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.429188 kubelet[2651]: E0208 23:37:34.429043 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.429188 kubelet[2651]: W0208 23:37:34.429053 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.429188 kubelet[2651]: E0208 23:37:34.429135 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.429323 kubelet[2651]: E0208 23:37:34.429318 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.429369 kubelet[2651]: W0208 23:37:34.429327 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.429369 kubelet[2651]: E0208 23:37:34.429342 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 8 23:37:34.429559 kubelet[2651]: E0208 23:37:34.429543 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:37:34.429617 kubelet[2651]: W0208 23:37:34.429560 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:37:34.429617 kubelet[2651]: E0208 23:37:34.429575 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:37:34.441222 kubelet[2651]: E0208 23:37:34.441208 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:37:34.441332 kubelet[2651]: W0208 23:37:34.441323 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:37:34.441401 kubelet[2651]: E0208 23:37:34.441394 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:37:34.460599 env[1405]: time="2024-02-08T23:37:34.460526336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:37:34.460811 env[1405]: time="2024-02-08T23:37:34.460781738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:37:34.460915 env[1405]: time="2024-02-08T23:37:34.460898039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:37:34.467984 env[1405]: time="2024-02-08T23:37:34.461239242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9576b66bde77f5c20b672ac5c8ccfb2a4307dfb3fac0618efbb206f11d3e4041 pid=3082 runtime=io.containerd.runc.v2
Feb 8 23:37:34.511251 kubelet[2651]: E0208 23:37:34.507598 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:37:34.511251 kubelet[2651]: W0208 23:37:34.507620 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:37:34.511251 kubelet[2651]: E0208 23:37:34.507646 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:37:34.511251 kubelet[2651]: E0208 23:37:34.509926 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:37:34.511251 kubelet[2651]: W0208 23:37:34.509941 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:37:34.511251 kubelet[2651]: E0208 23:37:34.509964 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.511251 kubelet[2651]: E0208 23:37:34.510161 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.511251 kubelet[2651]: W0208 23:37:34.510170 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.511251 kubelet[2651]: E0208 23:37:34.510184 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.511251 kubelet[2651]: E0208 23:37:34.510368 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.511871 kubelet[2651]: W0208 23:37:34.510378 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.511871 kubelet[2651]: E0208 23:37:34.510392 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.511871 kubelet[2651]: E0208 23:37:34.510572 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.511871 kubelet[2651]: W0208 23:37:34.510584 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.511871 kubelet[2651]: E0208 23:37:34.510597 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.511871 kubelet[2651]: E0208 23:37:34.510803 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.511871 kubelet[2651]: W0208 23:37:34.510813 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.511871 kubelet[2651]: E0208 23:37:34.510828 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.511871 kubelet[2651]: E0208 23:37:34.511037 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.511871 kubelet[2651]: W0208 23:37:34.511047 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.512271 kubelet[2651]: E0208 23:37:34.511062 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.512271 kubelet[2651]: E0208 23:37:34.511224 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.512271 kubelet[2651]: W0208 23:37:34.511234 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.512271 kubelet[2651]: E0208 23:37:34.511248 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.512271 kubelet[2651]: E0208 23:37:34.511402 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.512271 kubelet[2651]: W0208 23:37:34.511411 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.512271 kubelet[2651]: E0208 23:37:34.511424 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.512271 kubelet[2651]: E0208 23:37:34.511613 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.512271 kubelet[2651]: W0208 23:37:34.511621 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.512271 kubelet[2651]: E0208 23:37:34.511635 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.514128 kubelet[2651]: E0208 23:37:34.513987 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.514128 kubelet[2651]: W0208 23:37:34.514002 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.514128 kubelet[2651]: E0208 23:37:34.514019 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.514386 kubelet[2651]: E0208 23:37:34.514261 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.514386 kubelet[2651]: W0208 23:37:34.514271 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.514386 kubelet[2651]: E0208 23:37:34.514286 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.514510 kubelet[2651]: E0208 23:37:34.514443 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.514510 kubelet[2651]: W0208 23:37:34.514452 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.514510 kubelet[2651]: E0208 23:37:34.514466 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.514656 kubelet[2651]: E0208 23:37:34.514649 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.514988 kubelet[2651]: W0208 23:37:34.514659 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.515082 kubelet[2651]: E0208 23:37:34.514999 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.515276 kubelet[2651]: E0208 23:37:34.515257 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.515276 kubelet[2651]: W0208 23:37:34.515272 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.515410 kubelet[2651]: E0208 23:37:34.515292 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.515500 kubelet[2651]: E0208 23:37:34.515483 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.515500 kubelet[2651]: W0208 23:37:34.515495 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.515629 kubelet[2651]: E0208 23:37:34.515514 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.515735 kubelet[2651]: E0208 23:37:34.515720 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.515735 kubelet[2651]: W0208 23:37:34.515734 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.515847 kubelet[2651]: E0208 23:37:34.515752 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.516359 kubelet[2651]: E0208 23:37:34.516341 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.516359 kubelet[2651]: W0208 23:37:34.516354 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.516510 kubelet[2651]: E0208 23:37:34.516447 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.516602 kubelet[2651]: E0208 23:37:34.516585 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.516602 kubelet[2651]: W0208 23:37:34.516597 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.516777 kubelet[2651]: E0208 23:37:34.516689 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.516827 kubelet[2651]: E0208 23:37:34.516809 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.516827 kubelet[2651]: W0208 23:37:34.516818 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.516934 kubelet[2651]: E0208 23:37:34.516837 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.517036 kubelet[2651]: E0208 23:37:34.517018 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.517036 kubelet[2651]: W0208 23:37:34.517030 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.517344 kubelet[2651]: E0208 23:37:34.517049 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.529192 kubelet[2651]: E0208 23:37:34.526739 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.529192 kubelet[2651]: W0208 23:37:34.526758 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.529192 kubelet[2651]: E0208 23:37:34.526865 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.529192 kubelet[2651]: E0208 23:37:34.527739 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.529192 kubelet[2651]: W0208 23:37:34.527752 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.529192 kubelet[2651]: E0208 23:37:34.527775 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.529192 kubelet[2651]: E0208 23:37:34.527972 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.529192 kubelet[2651]: W0208 23:37:34.527982 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.530155 kubelet[2651]: E0208 23:37:34.530132 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.530368 kubelet[2651]: E0208 23:37:34.530352 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.530368 kubelet[2651]: W0208 23:37:34.530366 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.530476 kubelet[2651]: E0208 23:37:34.530386 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.532862 kubelet[2651]: E0208 23:37:34.532481 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.532862 kubelet[2651]: W0208 23:37:34.532496 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.532862 kubelet[2651]: E0208 23:37:34.532512 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.612004 kubelet[2651]: E0208 23:37:34.611887 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.612004 kubelet[2651]: W0208 23:37:34.611913 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.612004 kubelet[2651]: E0208 23:37:34.611937 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:34.612258 kubelet[2651]: E0208 23:37:34.612181 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.612258 kubelet[2651]: W0208 23:37:34.612191 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.612258 kubelet[2651]: E0208 23:37:34.612207 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 8 23:37:34.681065 env[1405]: time="2024-02-08T23:37:34.681017371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74794d48cc-5zzf7,Uid:1a9830f6-8e58-4b38-b9d7-746a3c92e95e,Namespace:calico-system,Attempt:0,} returns sandbox id \"9576b66bde77f5c20b672ac5c8ccfb2a4307dfb3fac0618efbb206f11d3e4041\""
Feb 8 23:37:34.682983 env[1405]: time="2024-02-08T23:37:34.682949086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\""
Feb 8 23:37:34.713183 kubelet[2651]: E0208 23:37:34.712842 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:37:34.713183 kubelet[2651]: W0208 23:37:34.712862 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:37:34.713183 kubelet[2651]: E0208 23:37:34.712884 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:37:34.713183 kubelet[2651]: E0208 23:37:34.713103 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:37:34.713183 kubelet[2651]: W0208 23:37:34.713113 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:37:34.713183 kubelet[2651]: E0208 23:37:34.713134 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 8 23:37:34.719379 kubelet[2651]: E0208 23:37:34.719361 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:37:34.719528 kubelet[2651]: W0208 23:37:34.719517 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:37:34.719616 kubelet[2651]: E0208 23:37:34.719608 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:37:34.798675 env[1405]: time="2024-02-08T23:37:34.798606296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xn925,Uid:9f310efb-23cd-4dae-811c-7af35a331183,Namespace:calico-system,Attempt:0,}"
Feb 8 23:37:34.813985 kubelet[2651]: E0208 23:37:34.813959 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:37:34.814223 kubelet[2651]: W0208 23:37:34.814191 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:37:34.814362 kubelet[2651]: E0208 23:37:34.814349 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:37:34.832981 env[1405]: time="2024-02-08T23:37:34.832922966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:37:34.833156 env[1405]: time="2024-02-08T23:37:34.832959267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:37:34.833156 env[1405]: time="2024-02-08T23:37:34.832973567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:37:34.833156 env[1405]: time="2024-02-08T23:37:34.833110668Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b11e086e82c507f7a639307eb18f1a3f317db2558eb07301b1b63d84b6b3842 pid=3161 runtime=io.containerd.runc.v2
Feb 8 23:37:34.886841 env[1405]: time="2024-02-08T23:37:34.886715890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xn925,Uid:9f310efb-23cd-4dae-811c-7af35a331183,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b11e086e82c507f7a639307eb18f1a3f317db2558eb07301b1b63d84b6b3842\""
Feb 8 23:37:34.915040 kubelet[2651]: E0208 23:37:34.915010 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:37:34.915040 kubelet[2651]: W0208 23:37:34.915034 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:37:34.915251 kubelet[2651]: E0208 23:37:34.915059 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:37:34.921720 kubelet[2651]: E0208 23:37:34.921684 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:37:34.921720 kubelet[2651]: W0208 23:37:34.921716 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:37:34.921878 kubelet[2651]: E0208 23:37:34.921738 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:37:35.109631 kernel: kauditd_printk_skb: 134 callbacks suppressed Feb 8 23:37:35.109801 kernel: audit: type=1325 audit(1707435455.094:288): table=filter:113 family=2 entries=14 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:35.094000 audit[3225]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:35.094000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe95917770 a2=0 a3=7ffe9591775c items=0 ppid=2808 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:35.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:35.136564 kernel: audit: type=1300 audit(1707435455.094:288): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe95917770 a2=0 a3=7ffe9591775c items=0 ppid=2808 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:35.136684 kernel: audit: type=1327 audit(1707435455.094:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:35.136713 kernel: audit: type=1325 audit(1707435455.094:289): table=nat:114 family=2 entries=20 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:35.094000 audit[3225]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:37:35.094000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe95917770 a2=0 a3=7ffe9591775c items=0 ppid=2808 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:35.166760 kernel: audit: type=1300 audit(1707435455.094:289): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe95917770 a2=0 a3=7ffe9591775c items=0 ppid=2808 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:37:35.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:35.176736 kernel: audit: type=1327 audit(1707435455.094:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:37:35.663889 kubelet[2651]: E0208 23:37:35.663857 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:36.727702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1768168980.mount: Deactivated successfully. Feb 8 23:37:37.663784 kubelet[2651]: E0208 23:37:37.663735 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:39.664199 kubelet[2651]: E0208 23:37:39.664154 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:41.663629 kubelet[2651]: E0208 23:37:41.663579 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:43.663730 kubelet[2651]: E0208 23:37:43.663673 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:45.664048 kubelet[2651]: E0208 23:37:45.663995 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:47.664569 kubelet[2651]: E0208 23:37:47.664514 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:49.664362 kubelet[2651]: E0208 23:37:49.664329 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:51.664029 kubelet[2651]: E0208 23:37:51.663994 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:53.663628 kubelet[2651]: E0208 23:37:53.663568 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:55.664152 kubelet[2651]: E0208 23:37:55.664089 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" 
podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:57.664315 kubelet[2651]: E0208 23:37:57.664255 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:37:59.664064 kubelet[2651]: E0208 23:37:59.664016 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:38:01.664085 kubelet[2651]: E0208 23:38:01.664035 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:38:03.664158 kubelet[2651]: E0208 23:38:03.664107 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:38:05.278870 env[1405]: time="2024-02-08T23:38:05.278821520Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:05.288293 env[1405]: time="2024-02-08T23:38:05.288259566Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:05.296026 env[1405]: time="2024-02-08T23:38:05.295985903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:05.299818 env[1405]: time="2024-02-08T23:38:05.299785722Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:05.300781 env[1405]: time="2024-02-08T23:38:05.300744526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\"" Feb 8 23:38:05.302396 env[1405]: time="2024-02-08T23:38:05.302355634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 8 23:38:05.319260 env[1405]: time="2024-02-08T23:38:05.319220716Z" level=info msg="CreateContainer within sandbox \"9576b66bde77f5c20b672ac5c8ccfb2a4307dfb3fac0618efbb206f11d3e4041\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 8 23:38:05.354036 env[1405]: time="2024-02-08T23:38:05.353974384Z" level=info msg="CreateContainer within sandbox \"9576b66bde77f5c20b672ac5c8ccfb2a4307dfb3fac0618efbb206f11d3e4041\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ca1908c105e8f109700dfad48f80b01b5a01a128be1a491a893ccd8860aa8a2\"" Feb 8 23:38:05.354858 env[1405]: time="2024-02-08T23:38:05.354823488Z" level=info msg="StartContainer for \"2ca1908c105e8f109700dfad48f80b01b5a01a128be1a491a893ccd8860aa8a2\"" Feb 8 23:38:05.440330 env[1405]: time="2024-02-08T23:38:05.440269203Z" level=info msg="StartContainer for 
\"2ca1908c105e8f109700dfad48f80b01b5a01a128be1a491a893ccd8860aa8a2\" returns successfully" Feb 8 23:38:05.664344 kubelet[2651]: E0208 23:38:05.664183 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:38:05.780513 kubelet[2651]: I0208 23:38:05.780475 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-74794d48cc-5zzf7" podStartSLOduration=-9.22337200507434e+09 pod.CreationTimestamp="2024-02-08 23:37:34 +0000 UTC" firstStartedPulling="2024-02-08 23:37:34.682442282 +0000 UTC m=+18.409204407" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:38:05.773790119 +0000 UTC m=+49.500552244" watchObservedRunningTime="2024-02-08 23:38:05.780434852 +0000 UTC m=+49.507197077" Feb 8 23:38:05.830000 audit[3297]: NETFILTER_CFG table=filter:115 family=2 entries=13 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:05.843688 kernel: audit: type=1325 audit(1707435485.830:290): table=filter:115 family=2 entries=13 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:05.843794 kernel: audit: type=1300 audit(1707435485.830:290): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffdb42d24b0 a2=0 a3=7ffdb42d249c items=0 ppid=2808 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:05.830000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffdb42d24b0 a2=0 a3=7ffdb42d249c items=0 ppid=2808 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:05.857465 kubelet[2651]: E0208 23:38:05.857442 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.857645 kubelet[2651]: W0208 23:38:05.857628 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.857755 kubelet[2651]: E0208 23:38:05.857743 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.858049 kubelet[2651]: E0208 23:38:05.858036 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.858160 kubelet[2651]: W0208 23:38:05.858149 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.858255 kubelet[2651]: E0208 23:38:05.858247 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.858520 kubelet[2651]: E0208 23:38:05.858508 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.858624 kubelet[2651]: W0208 23:38:05.858613 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.858716 kubelet[2651]: E0208 23:38:05.858708 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.859019 kubelet[2651]: E0208 23:38:05.859008 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.859151 kubelet[2651]: W0208 23:38:05.859140 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.859257 kubelet[2651]: E0208 23:38:05.859248 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.859519 kubelet[2651]: E0208 23:38:05.859509 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.859617 kubelet[2651]: W0208 23:38:05.859607 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.859705 kubelet[2651]: E0208 23:38:05.859697 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.859967 kubelet[2651]: E0208 23:38:05.859956 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.860061 kubelet[2651]: W0208 23:38:05.860050 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.860138 kubelet[2651]: E0208 23:38:05.860130 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.860440 kubelet[2651]: E0208 23:38:05.860429 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.860538 kubelet[2651]: W0208 23:38:05.860527 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.860612 kubelet[2651]: E0208 23:38:05.860602 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.860688 kernel: audit: type=1327 audit(1707435485.830:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:05.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:05.860966 kubelet[2651]: E0208 23:38:05.860956 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.861051 kubelet[2651]: W0208 23:38:05.861042 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.861115 kubelet[2651]: E0208 23:38:05.861107 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.861304 kubelet[2651]: E0208 23:38:05.861295 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.861373 kubelet[2651]: W0208 23:38:05.861365 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.861428 kubelet[2651]: E0208 23:38:05.861421 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.861634 kubelet[2651]: E0208 23:38:05.861625 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.861725 kubelet[2651]: W0208 23:38:05.861716 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.861785 kubelet[2651]: E0208 23:38:05.861777 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.861980 kubelet[2651]: E0208 23:38:05.861972 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.862037 kubelet[2651]: W0208 23:38:05.862029 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.862115 kubelet[2651]: E0208 23:38:05.862084 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.862319 kubelet[2651]: E0208 23:38:05.862310 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.862379 kubelet[2651]: W0208 23:38:05.862370 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.862433 kubelet[2651]: E0208 23:38:05.862427 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.831000 audit[3297]: NETFILTER_CFG table=nat:116 family=2 entries=27 op=nft_register_chain pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:05.880331 kernel: audit: type=1325 audit(1707435485.831:291): table=nat:116 family=2 entries=27 op=nft_register_chain pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:05.880403 kernel: audit: type=1300 audit(1707435485.831:291): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffdb42d24b0 a2=0 a3=7ffdb42d249c items=0 ppid=2808 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:05.831000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffdb42d24b0 a2=0 a3=7ffdb42d249c items=0 ppid=2808 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:05.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:05.900290 kernel: audit: type=1327 audit(1707435485.831:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:05.928398 kubelet[2651]: E0208 23:38:05.928326 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.928398 kubelet[2651]: W0208 23:38:05.928347 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.928398 kubelet[2651]: E0208 
23:38:05.928369 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.931274 kubelet[2651]: E0208 23:38:05.928752 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.931274 kubelet[2651]: W0208 23:38:05.928767 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.931274 kubelet[2651]: E0208 23:38:05.928786 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.931274 kubelet[2651]: E0208 23:38:05.929056 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.931274 kubelet[2651]: W0208 23:38:05.929073 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.931274 kubelet[2651]: E0208 23:38:05.929093 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.931274 kubelet[2651]: E0208 23:38:05.929304 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.931274 kubelet[2651]: W0208 23:38:05.929313 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.931274 kubelet[2651]: E0208 23:38:05.929330 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.931274 kubelet[2651]: E0208 23:38:05.929521 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.931790 kubelet[2651]: W0208 23:38:05.929530 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.931790 kubelet[2651]: E0208 23:38:05.929553 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.931790 kubelet[2651]: E0208 23:38:05.929770 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.931790 kubelet[2651]: W0208 23:38:05.929780 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.931790 kubelet[2651]: E0208 23:38:05.929853 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.931790 kubelet[2651]: E0208 23:38:05.930015 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.931790 kubelet[2651]: W0208 23:38:05.930023 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.931790 kubelet[2651]: E0208 23:38:05.930093 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.931790 kubelet[2651]: E0208 23:38:05.930266 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.931790 kubelet[2651]: W0208 23:38:05.930275 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.932214 kubelet[2651]: E0208 23:38:05.930335 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.932214 kubelet[2651]: E0208 23:38:05.930561 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.932214 kubelet[2651]: W0208 23:38:05.930591 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.932214 kubelet[2651]: E0208 23:38:05.930615 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.932214 kubelet[2651]: E0208 23:38:05.931329 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.932214 kubelet[2651]: W0208 23:38:05.931341 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.932214 kubelet[2651]: E0208 23:38:05.931360 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.932214 kubelet[2651]: E0208 23:38:05.931558 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.932214 kubelet[2651]: W0208 23:38:05.931567 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.932214 kubelet[2651]: E0208 23:38:05.931640 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.932748 kubelet[2651]: E0208 23:38:05.931799 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.932748 kubelet[2651]: W0208 23:38:05.931885 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.932748 kubelet[2651]: E0208 23:38:05.931967 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.932748 kubelet[2651]: E0208 23:38:05.932107 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.932748 kubelet[2651]: W0208 23:38:05.932116 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.932748 kubelet[2651]: E0208 23:38:05.932133 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.932748 kubelet[2651]: E0208 23:38:05.932323 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.932748 kubelet[2651]: W0208 23:38:05.932333 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.932748 kubelet[2651]: E0208 23:38:05.932351 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.933368 kubelet[2651]: E0208 23:38:05.933228 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.933368 kubelet[2651]: W0208 23:38:05.933240 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.933368 kubelet[2651]: E0208 23:38:05.933319 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.933706 kubelet[2651]: E0208 23:38:05.933688 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.933706 kubelet[2651]: W0208 23:38:05.933702 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.934570 kubelet[2651]: E0208 23:38:05.933719 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:05.934570 kubelet[2651]: E0208 23:38:05.933931 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.934570 kubelet[2651]: W0208 23:38:05.933941 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.934570 kubelet[2651]: E0208 23:38:05.933956 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:05.934570 kubelet[2651]: E0208 23:38:05.934279 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:05.934570 kubelet[2651]: W0208 23:38:05.934297 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:05.934570 kubelet[2651]: E0208 23:38:05.934313 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.768166 kubelet[2651]: E0208 23:38:06.768138 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.768166 kubelet[2651]: W0208 23:38:06.768156 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.768764 kubelet[2651]: E0208 23:38:06.768178 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.768764 kubelet[2651]: E0208 23:38:06.768396 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.768764 kubelet[2651]: W0208 23:38:06.768405 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.768764 kubelet[2651]: E0208 23:38:06.768421 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.769992 kubelet[2651]: E0208 23:38:06.769260 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.769992 kubelet[2651]: W0208 23:38:06.769275 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.769992 kubelet[2651]: E0208 23:38:06.769303 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.769992 kubelet[2651]: E0208 23:38:06.769536 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.769992 kubelet[2651]: W0208 23:38:06.769546 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.769992 kubelet[2651]: E0208 23:38:06.769561 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.770374 kubelet[2651]: E0208 23:38:06.770354 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.770374 kubelet[2651]: W0208 23:38:06.770369 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.770511 kubelet[2651]: E0208 23:38:06.770386 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.770581 kubelet[2651]: E0208 23:38:06.770563 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.770581 kubelet[2651]: W0208 23:38:06.770578 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.770778 kubelet[2651]: E0208 23:38:06.770595 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.770872 kubelet[2651]: E0208 23:38:06.770854 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.770872 kubelet[2651]: W0208 23:38:06.770868 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.770971 kubelet[2651]: E0208 23:38:06.770884 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.772199 kubelet[2651]: E0208 23:38:06.771478 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.772199 kubelet[2651]: W0208 23:38:06.771492 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.772199 kubelet[2651]: E0208 23:38:06.771511 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.772199 kubelet[2651]: E0208 23:38:06.771716 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.772199 kubelet[2651]: W0208 23:38:06.771727 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.772199 kubelet[2651]: E0208 23:38:06.771742 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.772977 kubelet[2651]: E0208 23:38:06.772961 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.772977 kubelet[2651]: W0208 23:38:06.772974 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.773155 kubelet[2651]: E0208 23:38:06.772990 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.773212 kubelet[2651]: E0208 23:38:06.773176 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.773212 kubelet[2651]: W0208 23:38:06.773186 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.773212 kubelet[2651]: E0208 23:38:06.773201 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.773373 kubelet[2651]: E0208 23:38:06.773356 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.773373 kubelet[2651]: W0208 23:38:06.773367 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.773601 kubelet[2651]: E0208 23:38:06.773382 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.834022 kubelet[2651]: E0208 23:38:06.833999 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.834234 kubelet[2651]: W0208 23:38:06.834201 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.834389 kubelet[2651]: E0208 23:38:06.834376 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.834738 kubelet[2651]: E0208 23:38:06.834725 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.834861 kubelet[2651]: W0208 23:38:06.834836 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.834931 kubelet[2651]: E0208 23:38:06.834864 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.835117 kubelet[2651]: E0208 23:38:06.835100 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.835117 kubelet[2651]: W0208 23:38:06.835113 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.835237 kubelet[2651]: E0208 23:38:06.835133 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.835377 kubelet[2651]: E0208 23:38:06.835362 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.835377 kubelet[2651]: W0208 23:38:06.835374 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.835503 kubelet[2651]: E0208 23:38:06.835397 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.835626 kubelet[2651]: E0208 23:38:06.835611 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.835626 kubelet[2651]: W0208 23:38:06.835623 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.835779 kubelet[2651]: E0208 23:38:06.835642 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.835849 kubelet[2651]: E0208 23:38:06.835836 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.835915 kubelet[2651]: W0208 23:38:06.835851 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.835915 kubelet[2651]: E0208 23:38:06.835869 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.836092 kubelet[2651]: E0208 23:38:06.836077 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.836092 kubelet[2651]: W0208 23:38:06.836091 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.836215 kubelet[2651]: E0208 23:38:06.836177 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.836472 kubelet[2651]: E0208 23:38:06.836457 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.836472 kubelet[2651]: W0208 23:38:06.836469 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.836603 kubelet[2651]: E0208 23:38:06.836550 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.836712 kubelet[2651]: E0208 23:38:06.836698 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.836712 kubelet[2651]: W0208 23:38:06.836709 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.836840 kubelet[2651]: E0208 23:38:06.836789 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.836930 kubelet[2651]: E0208 23:38:06.836916 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.836930 kubelet[2651]: W0208 23:38:06.836927 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.837032 kubelet[2651]: E0208 23:38:06.836945 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.837149 kubelet[2651]: E0208 23:38:06.837134 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.837149 kubelet[2651]: W0208 23:38:06.837146 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.837280 kubelet[2651]: E0208 23:38:06.837166 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.837386 kubelet[2651]: E0208 23:38:06.837371 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.837386 kubelet[2651]: W0208 23:38:06.837383 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.837507 kubelet[2651]: E0208 23:38:06.837402 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.837606 kubelet[2651]: E0208 23:38:06.837593 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.837606 kubelet[2651]: W0208 23:38:06.837604 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.837757 kubelet[2651]: E0208 23:38:06.837623 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.838030 kubelet[2651]: E0208 23:38:06.838017 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.838120 kubelet[2651]: W0208 23:38:06.838096 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.838184 kubelet[2651]: E0208 23:38:06.838123 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.838321 kubelet[2651]: E0208 23:38:06.838302 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.838393 kubelet[2651]: W0208 23:38:06.838376 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.838460 kubelet[2651]: E0208 23:38:06.838400 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.838613 kubelet[2651]: E0208 23:38:06.838598 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.838613 kubelet[2651]: W0208 23:38:06.838611 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.838787 kubelet[2651]: E0208 23:38:06.838630 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:06.838976 kubelet[2651]: E0208 23:38:06.838961 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.838976 kubelet[2651]: W0208 23:38:06.838973 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.839105 kubelet[2651]: E0208 23:38:06.838992 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:06.839191 kubelet[2651]: E0208 23:38:06.839175 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:06.839191 kubelet[2651]: W0208 23:38:06.839188 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:06.839265 kubelet[2651]: E0208 23:38:06.839202 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:07.664437 kubelet[2651]: E0208 23:38:07.664406 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:38:07.779294 kubelet[2651]: E0208 23:38:07.779264 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:07.779294 kubelet[2651]: W0208 23:38:07.779284 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:07.779294 kubelet[2651]: E0208 23:38:07.779308 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:07.779954 kubelet[2651]: E0208 23:38:07.779524 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:07.779954 kubelet[2651]: W0208 23:38:07.779534 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:07.779954 kubelet[2651]: E0208 23:38:07.779549 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:07.779954 kubelet[2651]: E0208 23:38:07.779749 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:07.779954 kubelet[2651]: W0208 23:38:07.779758 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:07.779954 kubelet[2651]: E0208 23:38:07.779772 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:07.779954 kubelet[2651]: E0208 23:38:07.779958 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:07.780283 kubelet[2651]: W0208 23:38:07.779967 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:07.780283 kubelet[2651]: E0208 23:38:07.779981 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:07.780283 kubelet[2651]: E0208 23:38:07.780132 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:07.780283 kubelet[2651]: W0208 23:38:07.780141 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:07.780283 kubelet[2651]: E0208 23:38:07.780154 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:07.780507 kubelet[2651]: E0208 23:38:07.780294 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:07.780507 kubelet[2651]: W0208 23:38:07.780303 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:07.780507 kubelet[2651]: E0208 23:38:07.780316 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:07.780638 kubelet[2651]: E0208 23:38:07.780523 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:07.780638 kubelet[2651]: W0208 23:38:07.780532 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:07.780638 kubelet[2651]: E0208 23:38:07.780545 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 8 23:38:07.781196 kubelet[2651]: E0208 23:38:07.781177 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:07.781196 kubelet[2651]: W0208 23:38:07.781193 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:07.781349 kubelet[2651]: E0208 23:38:07.781209 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 8 23:38:07.781518 kubelet[2651]: E0208 23:38:07.781449 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 8 23:38:07.781937 kubelet[2651]: W0208 23:38:07.781467 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 8 23:38:07.782032 kubelet[2651]: E0208 23:38:07.781947 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 8 23:38:07.782573 kubelet[2651]: E0208 23:38:07.782555 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 8 23:38:07.782573 kubelet[2651]: W0208 23:38:07.782568 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 8 23:38:07.782753 kubelet[2651]: E0208 23:38:07.782584 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 8 23:38:07.848849 kubelet[2651]: E0208 23:38:07.848747 2651 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 8 23:38:09.663746 kubelet[2651]: E0208 23:38:09.663709 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf
Feb 8 23:38:09.977163 env[1405]: time="2024-02-08T23:38:09.977116253Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:38:09.982690 env[1405]: time="2024-02-08T23:38:09.982643879Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:38:09.986489 env[1405]: time="2024-02-08T23:38:09.986458197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:38:09.991687 env[1405]: time="2024-02-08T23:38:09.991648121Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:38:09.992525 env[1405]: time="2024-02-08T23:38:09.992492925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\""
Feb 8 23:38:09.995969 env[1405]: time="2024-02-08T23:38:09.995925941Z" level=info msg="CreateContainer within sandbox \"7b11e086e82c507f7a639307eb18f1a3f317db2558eb07301b1b63d84b6b3842\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 8 23:38:10.019979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746642769.mount: Deactivated successfully.
Feb 8 23:38:10.031516 env[1405]: time="2024-02-08T23:38:10.031483404Z" level=info msg="CreateContainer within sandbox \"7b11e086e82c507f7a639307eb18f1a3f317db2558eb07301b1b63d84b6b3842\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae33bd27a27759c3777442ec4672f9b7a21a8a7027656374247252c98c4d3d55\""
Feb 8 23:38:10.033280 env[1405]: time="2024-02-08T23:38:10.033251512Z" level=info msg="StartContainer for \"ae33bd27a27759c3777442ec4672f9b7a21a8a7027656374247252c98c4d3d55\""
Feb 8 23:38:10.097716 env[1405]: time="2024-02-08T23:38:10.097604807Z" level=info msg="StartContainer for \"ae33bd27a27759c3777442ec4672f9b7a21a8a7027656374247252c98c4d3d55\" returns successfully"
Feb 8 23:38:10.858796 env[1405]: time="2024-02-08T23:38:10.858558794Z" level=info msg="shim disconnected" id=ae33bd27a27759c3777442ec4672f9b7a21a8a7027656374247252c98c4d3d55
Feb 8 23:38:10.858796 env[1405]: time="2024-02-08T23:38:10.858612294Z" level=warning msg="cleaning up after shim disconnected" id=ae33bd27a27759c3777442ec4672f9b7a21a8a7027656374247252c98c4d3d55 namespace=k8s.io
Feb 8 23:38:10.858796 env[1405]: time="2024-02-08T23:38:10.858640394Z" level=info msg="cleaning up dead shim"
Feb 8 23:38:10.866801 env[1405]: time="2024-02-08T23:38:10.866764232Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:38:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3442 runtime=io.containerd.runc.v2\n"
Feb 8 23:38:11.014701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae33bd27a27759c3777442ec4672f9b7a21a8a7027656374247252c98c4d3d55-rootfs.mount: Deactivated successfully.
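[Editor's note: the repeated driver-call failures earlier in this log come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, which does not exist on disk, so the driver's stdout is empty and driver-call.go's JSON unmarshal fails. As a hedged sketch of what the kubelet expects back (the function name `flexvol_call` is illustrative; the JSON shape follows the FlexVolume calling convention, not anything specific to this host):]

```shell
# Illustrative FlexVolume driver stub. The kubelet invokes the driver
# executable with a subcommand ("init" first) and parses its stdout as
# JSON; an empty reply yields "unexpected end of JSON input" as logged.
flexvol_call() {
  case "$1" in
    init)
      # Well-formed success reply; advertise no attach support.
      echo '{"status":"Success","capabilities":{"attach":false}}'
      ;;
    *)
      # Even unhandled subcommands should return well-formed JSON.
      echo '{"status":"Not supported"}'
      return 1
      ;;
  esac
}

flexvol_call init
```

[The flexvol-driver container started just above is what normally installs such a driver; the log errors stop once a real executable answering init with JSON is in place.]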
Feb 8 23:38:11.664528 kubelet[2651]: E0208 23:38:11.664494 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf
Feb 8 23:38:11.779540 env[1405]: time="2024-02-08T23:38:11.779484077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\""
Feb 8 23:38:13.664264 kubelet[2651]: E0208 23:38:13.664216 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf
Feb 8 23:38:15.664411 kubelet[2651]: E0208 23:38:15.664288 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf
Feb 8 23:38:17.664011 kubelet[2651]: E0208 23:38:17.663967 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf
Feb 8 23:38:18.978634 env[1405]: time="2024-02-08T23:38:18.978584403Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:38:18.983441 env[1405]: time="2024-02-08T23:38:18.983404324Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:38:18.993756 env[1405]: time="2024-02-08T23:38:18.993727468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:38:18.996764 env[1405]: time="2024-02-08T23:38:18.996735280Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:38:18.997444 env[1405]: time="2024-02-08T23:38:18.997414883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\""
Feb 8 23:38:19.009165 env[1405]: time="2024-02-08T23:38:19.009130832Z" level=info msg="CreateContainer within sandbox \"7b11e086e82c507f7a639307eb18f1a3f317db2558eb07301b1b63d84b6b3842\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 8 23:38:19.029862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717725298.mount: Deactivated successfully.
Feb 8 23:38:19.042956 env[1405]: time="2024-02-08T23:38:19.042920974Z" level=info msg="CreateContainer within sandbox \"7b11e086e82c507f7a639307eb18f1a3f317db2558eb07301b1b63d84b6b3842\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d05b6839eed4155183ec4e73f053b3b3021a463aeb675c65c273d05919847854\""
Feb 8 23:38:19.044738 env[1405]: time="2024-02-08T23:38:19.044706482Z" level=info msg="StartContainer for \"d05b6839eed4155183ec4e73f053b3b3021a463aeb675c65c273d05919847854\""
Feb 8 23:38:19.103217 env[1405]: time="2024-02-08T23:38:19.103130727Z" level=info msg="StartContainer for \"d05b6839eed4155183ec4e73f053b3b3021a463aeb675c65c273d05919847854\" returns successfully"
Feb 8 23:38:19.664258 kubelet[2651]: E0208 23:38:19.664222 2651 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf
Feb 8 23:38:20.791386 env[1405]: time="2024-02-08T23:38:20.791307490Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 8 23:38:20.814302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d05b6839eed4155183ec4e73f053b3b3021a463aeb675c65c273d05919847854-rootfs.mount: Deactivated successfully.
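[Editor's note: the "no network config found in /etc/cni/net.d" reload error above persists until a valid CNI network config lands in that directory; with Calico it is normally the install-cni container that writes one once the node components are healthy. A minimal sketch of the kind of conflist involved (file name and contents are illustrative, not recovered from this host, and the sketch writes to the current directory to stay side-effect free):]

```shell
# Illustrative only: a Calico-style conflist of the general shape the
# container runtime scans /etc/cni/net.d for.
cat > 10-calico.conflist <<'EOF'
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "ipam": { "type": "calico-ipam" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF
```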
Feb 8 23:38:20.845089 kubelet[2651]: I0208 23:38:20.843626 2651 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 8 23:38:20.861507 kubelet[2651]: I0208 23:38:20.861468 2651 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:38:20.868822 kubelet[2651]: I0208 23:38:20.868794 2651 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:38:20.869263 kubelet[2651]: I0208 23:38:20.869241 2651 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:38:20.938578 kubelet[2651]: I0208 23:38:20.938539 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z7pw\" (UniqueName: \"kubernetes.io/projected/f03ff7cf-0bee-448e-9a60-80431e41383c-kube-api-access-2z7pw\") pod \"coredns-787d4945fb-7nz52\" (UID: \"f03ff7cf-0bee-448e-9a60-80431e41383c\") " pod="kube-system/coredns-787d4945fb-7nz52"
Feb 8 23:38:20.938813 kubelet[2651]: I0208 23:38:20.938620 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d47f938b-76c5-40ea-8321-fd8530afd202-config-volume\") pod \"coredns-787d4945fb-r5zfj\" (UID: \"d47f938b-76c5-40ea-8321-fd8530afd202\") " pod="kube-system/coredns-787d4945fb-r5zfj"
Feb 8 23:38:20.938813 kubelet[2651]: I0208 23:38:20.938691 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f03ff7cf-0bee-448e-9a60-80431e41383c-config-volume\") pod \"coredns-787d4945fb-7nz52\" (UID: \"f03ff7cf-0bee-448e-9a60-80431e41383c\") " pod="kube-system/coredns-787d4945fb-7nz52"
Feb 8 23:38:20.938813 kubelet[2651]: I0208 23:38:20.938731 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5lch\" (UniqueName: \"kubernetes.io/projected/c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b-kube-api-access-c5lch\") pod \"calico-kube-controllers-868b7ffccf-pz49r\" (UID: \"c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b\") " pod="calico-system/calico-kube-controllers-868b7ffccf-pz49r"
Feb 8 23:38:20.938813 kubelet[2651]: I0208 23:38:20.938771 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b-tigera-ca-bundle\") pod \"calico-kube-controllers-868b7ffccf-pz49r\" (UID: \"c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b\") " pod="calico-system/calico-kube-controllers-868b7ffccf-pz49r"
Feb 8 23:38:20.939049 kubelet[2651]: I0208 23:38:20.938827 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnw4l\" (UniqueName: \"kubernetes.io/projected/d47f938b-76c5-40ea-8321-fd8530afd202-kube-api-access-mnw4l\") pod \"coredns-787d4945fb-r5zfj\" (UID: \"d47f938b-76c5-40ea-8321-fd8530afd202\") " pod="kube-system/coredns-787d4945fb-r5zfj"
Feb 8 23:38:21.171064 env[1405]: time="2024-02-08T23:38:21.170927965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r5zfj,Uid:d47f938b-76c5-40ea-8321-fd8530afd202,Namespace:kube-system,Attempt:0,}"
Feb 8 23:38:21.172459 env[1405]: time="2024-02-08T23:38:21.172415071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868b7ffccf-pz49r,Uid:c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b,Namespace:calico-system,Attempt:0,}"
Feb 8 23:38:21.177302 env[1405]: time="2024-02-08T23:38:21.177272891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-7nz52,Uid:f03ff7cf-0bee-448e-9a60-80431e41383c,Namespace:kube-system,Attempt:0,}"
Feb 8 23:38:22.446335 env[1405]: time="2024-02-08T23:38:22.446284316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pfb4q,Uid:b23333af-8873-429e-8aa7-941ea237b3cf,Namespace:calico-system,Attempt:0,}"
Feb 8 23:38:22.461375 env[1405]: time="2024-02-08T23:38:22.461331678Z" level=info msg="shim disconnected" id=d05b6839eed4155183ec4e73f053b3b3021a463aeb675c65c273d05919847854
Feb 8 23:38:22.461502 env[1405]: time="2024-02-08T23:38:22.461379778Z" level=warning msg="cleaning up after shim disconnected" id=d05b6839eed4155183ec4e73f053b3b3021a463aeb675c65c273d05919847854 namespace=k8s.io
Feb 8 23:38:22.461502 env[1405]: time="2024-02-08T23:38:22.461391278Z" level=info msg="cleaning up dead shim"
Feb 8 23:38:22.469569 env[1405]: time="2024-02-08T23:38:22.469537611Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:38:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3518 runtime=io.containerd.runc.v2\n"
Feb 8 23:38:22.694063 env[1405]: time="2024-02-08T23:38:22.693991330Z" level=error msg="Failed to destroy network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.694434 env[1405]: time="2024-02-08T23:38:22.694388232Z" level=error msg="encountered an error cleaning up failed sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.694541 env[1405]: time="2024-02-08T23:38:22.694457132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r5zfj,Uid:d47f938b-76c5-40ea-8321-fd8530afd202,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.694739 kubelet[2651]: E0208 23:38:22.694708 2651 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.695157 kubelet[2651]: E0208 23:38:22.694785 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-r5zfj"
Feb 8 23:38:22.695157 kubelet[2651]: E0208 23:38:22.694836 2651 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-r5zfj"
Feb 8 23:38:22.695157 kubelet[2651]: E0208 23:38:22.694929 2651 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-r5zfj_kube-system(d47f938b-76c5-40ea-8321-fd8530afd202)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-r5zfj_kube-system(d47f938b-76c5-40ea-8321-fd8530afd202)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-r5zfj" podUID=d47f938b-76c5-40ea-8321-fd8530afd202
Feb 8 23:38:22.698049 env[1405]: time="2024-02-08T23:38:22.697259044Z" level=error msg="Failed to destroy network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.698049 env[1405]: time="2024-02-08T23:38:22.697733746Z" level=error msg="encountered an error cleaning up failed sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.698495 env[1405]: time="2024-02-08T23:38:22.698435349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-7nz52,Uid:f03ff7cf-0bee-448e-9a60-80431e41383c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.698682 kubelet[2651]: E0208 23:38:22.698645 2651 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.698779 kubelet[2651]: E0208 23:38:22.698766 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-7nz52"
Feb 8 23:38:22.698838 kubelet[2651]: E0208 23:38:22.698815 2651 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-7nz52"
Feb 8 23:38:22.698914 kubelet[2651]: E0208 23:38:22.698897 2651 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-7nz52_kube-system(f03ff7cf-0bee-448e-9a60-80431e41383c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-7nz52_kube-system(f03ff7cf-0bee-448e-9a60-80431e41383c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-7nz52" podUID=f03ff7cf-0bee-448e-9a60-80431e41383c
Feb 8 23:38:22.713558 env[1405]: time="2024-02-08T23:38:22.713517510Z" level=error msg="Failed to destroy network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.713843 env[1405]: time="2024-02-08T23:38:22.713809712Z" level=error msg="encountered an error cleaning up failed sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.713913 env[1405]: time="2024-02-08T23:38:22.713862512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868b7ffccf-pz49r,Uid:c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.714072 kubelet[2651]: E0208 23:38:22.714051 2651 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 8 23:38:22.714153 kubelet[2651]: E0208 23:38:22.714100 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-868b7ffccf-pz49r"
Feb 8 23:38:22.714153 kubelet[2651]: E0208 23:38:22.714144 2651 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-868b7ffccf-pz49r"
Feb 8 23:38:22.714246 kubelet[2651]: E0208 23:38:22.714223 2651 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-868b7ffccf-pz49r_calico-system(c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-868b7ffccf-pz49r_calico-system(c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-868b7ffccf-pz49r" podUID=c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b
Feb 8 23:38:22.725867 env[1405]: time="2024-02-08T23:38:22.725825861Z" level=error msg="Failed to destroy network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 8 23:38:22.726167 env[1405]: time="2024-02-08T23:38:22.726134662Z" level=error msg="encountered an error cleaning up failed sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:38:22.726253 env[1405]: time="2024-02-08T23:38:22.726184862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pfb4q,Uid:b23333af-8873-429e-8aa7-941ea237b3cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:38:22.726393 kubelet[2651]: E0208 23:38:22.726369 2651 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:38:22.726481 kubelet[2651]: E0208 23:38:22.726426 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pfb4q" Feb 8 23:38:22.726481 
kubelet[2651]: E0208 23:38:22.726457 2651 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pfb4q" Feb 8 23:38:22.726577 kubelet[2651]: E0208 23:38:22.726514 2651 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pfb4q_calico-system(b23333af-8873-429e-8aa7-941ea237b3cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pfb4q_calico-system(b23333af-8873-429e-8aa7-941ea237b3cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:38:22.803293 env[1405]: time="2024-02-08T23:38:22.803242178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 8 23:38:22.804802 kubelet[2651]: I0208 23:38:22.804300 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:38:22.805292 env[1405]: time="2024-02-08T23:38:22.805258786Z" level=info msg="StopPodSandbox for \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\"" Feb 8 23:38:22.817242 kubelet[2651]: I0208 23:38:22.811306 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" 
Feb 8 23:38:22.817242 kubelet[2651]: I0208 23:38:22.816055 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:38:22.817466 env[1405]: time="2024-02-08T23:38:22.811914513Z" level=info msg="StopPodSandbox for \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\"" Feb 8 23:38:22.822616 env[1405]: time="2024-02-08T23:38:22.822582257Z" level=info msg="StopPodSandbox for \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\"" Feb 8 23:38:22.824096 kubelet[2651]: I0208 23:38:22.824074 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:38:22.824685 env[1405]: time="2024-02-08T23:38:22.824636565Z" level=info msg="StopPodSandbox for \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\"" Feb 8 23:38:22.894817 env[1405]: time="2024-02-08T23:38:22.894759753Z" level=error msg="StopPodSandbox for \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\" failed" error="failed to destroy network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:38:22.895038 kubelet[2651]: E0208 23:38:22.895018 2651 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:38:22.895134 
kubelet[2651]: E0208 23:38:22.895082 2651 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d} Feb 8 23:38:22.895134 kubelet[2651]: E0208 23:38:22.895128 2651 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:38:22.895272 kubelet[2651]: E0208 23:38:22.895180 2651 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-868b7ffccf-pz49r" podUID=c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b Feb 8 23:38:22.902887 env[1405]: time="2024-02-08T23:38:22.902839686Z" level=error msg="StopPodSandbox for \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\" failed" error="failed to destroy network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:38:22.903049 kubelet[2651]: E0208 23:38:22.903030 2651 remote_runtime.go:205] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:38:22.903137 kubelet[2651]: E0208 23:38:22.903071 2651 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968} Feb 8 23:38:22.903137 kubelet[2651]: E0208 23:38:22.903119 2651 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d47f938b-76c5-40ea-8321-fd8530afd202\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:38:22.903267 kubelet[2651]: E0208 23:38:22.903155 2651 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d47f938b-76c5-40ea-8321-fd8530afd202\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-r5zfj" podUID=d47f938b-76c5-40ea-8321-fd8530afd202 Feb 8 23:38:22.909655 env[1405]: time="2024-02-08T23:38:22.909603713Z" level=error msg="StopPodSandbox for 
\"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\" failed" error="failed to destroy network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:38:22.910196 kubelet[2651]: E0208 23:38:22.910031 2651 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:38:22.910196 kubelet[2651]: E0208 23:38:22.910071 2651 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9} Feb 8 23:38:22.910196 kubelet[2651]: E0208 23:38:22.910125 2651 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b23333af-8873-429e-8aa7-941ea237b3cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:38:22.910196 kubelet[2651]: E0208 23:38:22.910174 2651 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b23333af-8873-429e-8aa7-941ea237b3cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pfb4q" podUID=b23333af-8873-429e-8aa7-941ea237b3cf Feb 8 23:38:22.912721 env[1405]: time="2024-02-08T23:38:22.912678726Z" level=error msg="StopPodSandbox for \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\" failed" error="failed to destroy network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 8 23:38:22.912871 kubelet[2651]: E0208 23:38:22.912853 2651 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:38:22.912958 kubelet[2651]: E0208 23:38:22.912884 2651 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431} Feb 8 23:38:22.912958 kubelet[2651]: E0208 23:38:22.912927 2651 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f03ff7cf-0bee-448e-9a60-80431e41383c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 8 23:38:22.913066 kubelet[2651]: E0208 23:38:22.912963 2651 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f03ff7cf-0bee-448e-9a60-80431e41383c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-7nz52" podUID=f03ff7cf-0bee-448e-9a60-80431e41383c Feb 8 23:38:23.527874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431-shm.mount: Deactivated successfully. Feb 8 23:38:23.528036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d-shm.mount: Deactivated successfully. Feb 8 23:38:23.528162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968-shm.mount: Deactivated successfully. Feb 8 23:38:31.266398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2501079646.mount: Deactivated successfully. 
Feb 8 23:38:31.353560 env[1405]: time="2024-02-08T23:38:31.353507802Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:31.358621 env[1405]: time="2024-02-08T23:38:31.358578722Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:31.362736 env[1405]: time="2024-02-08T23:38:31.362707438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:31.365398 env[1405]: time="2024-02-08T23:38:31.365370248Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:31.365829 env[1405]: time="2024-02-08T23:38:31.365799549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 8 23:38:31.383450 env[1405]: time="2024-02-08T23:38:31.379988804Z" level=info msg="CreateContainer within sandbox \"7b11e086e82c507f7a639307eb18f1a3f317db2558eb07301b1b63d84b6b3842\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 8 23:38:31.409553 env[1405]: time="2024-02-08T23:38:31.409517317Z" level=info msg="CreateContainer within sandbox \"7b11e086e82c507f7a639307eb18f1a3f317db2558eb07301b1b63d84b6b3842\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a1dc31ac7ad67788dcc47efdfc725038082aa67b9f2b203c8fd82332a70da0ac\"" Feb 8 23:38:31.410171 env[1405]: time="2024-02-08T23:38:31.410140819Z" level=info msg="StartContainer for 
\"a1dc31ac7ad67788dcc47efdfc725038082aa67b9f2b203c8fd82332a70da0ac\"" Feb 8 23:38:31.468881 env[1405]: time="2024-02-08T23:38:31.468833944Z" level=info msg="StartContainer for \"a1dc31ac7ad67788dcc47efdfc725038082aa67b9f2b203c8fd82332a70da0ac\" returns successfully" Feb 8 23:38:31.852003 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 8 23:38:31.852153 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 8 23:38:31.872968 kubelet[2651]: I0208 23:38:31.872928 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-xn925" podStartSLOduration=-9.223371978981909e+09 pod.CreationTimestamp="2024-02-08 23:37:34 +0000 UTC" firstStartedPulling="2024-02-08 23:37:34.888272102 +0000 UTC m=+18.615034327" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:38:31.867791574 +0000 UTC m=+75.594553699" watchObservedRunningTime="2024-02-08 23:38:31.872867894 +0000 UTC m=+75.599630119" Feb 8 23:38:33.212000 audit[3892]: AVC avc: denied { write } for pid=3892 comm="tee" name="fd" dev="proc" ino=32725 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:38:33.227686 kernel: audit: type=1400 audit(1707435513.212:292): avc: denied { write } for pid=3892 comm="tee" name="fd" dev="proc" ino=32725 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:38:33.212000 audit[3892]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd16466975 a2=241 a3=1b6 items=1 ppid=3863 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.255690 kernel: audit: type=1300 audit(1707435513.212:292): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c 
a1=7ffd16466975 a2=241 a3=1b6 items=1 ppid=3863 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.212000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 8 23:38:33.212000 audit: PATH item=0 name="/dev/fd/63" inode=33148 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:33.274571 kernel: audit: type=1307 audit(1707435513.212:292): cwd="/etc/service/enabled/bird6/log" Feb 8 23:38:33.274689 kernel: audit: type=1302 audit(1707435513.212:292): item=0 name="/dev/fd/63" inode=33148 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:33.212000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:38:33.287685 kernel: audit: type=1327 audit(1707435513.212:292): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:38:33.231000 audit[3900]: AVC avc: denied { write } for pid=3900 comm="tee" name="fd" dev="proc" ino=33168 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:38:33.301694 kernel: audit: type=1400 audit(1707435513.231:293): avc: denied { write } for pid=3900 comm="tee" name="fd" dev="proc" ino=33168 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:38:33.231000 audit[3900]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe5b6ef966 a2=241 a3=1b6 items=1 ppid=3865 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.320686 kernel: audit: type=1300 audit(1707435513.231:293): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe5b6ef966 a2=241 a3=1b6 items=1 ppid=3865 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.231000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 8 23:38:33.327687 kernel: audit: type=1307 audit(1707435513.231:293): cwd="/etc/service/enabled/node-status-reporter/log" Feb 8 23:38:33.327766 kernel: audit: type=1302 audit(1707435513.231:293): item=0 name="/dev/fd/63" inode=33151 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:33.231000 audit: PATH item=0 name="/dev/fd/63" inode=33151 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:33.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:38:33.354686 kernel: audit: type=1327 audit(1707435513.231:293): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:38:33.312000 audit[3909]: AVC avc: denied { write } for pid=3909 comm="tee" name="fd" dev="proc" ino=32749 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:38:33.312000 audit[3909]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde1813975 a2=241 a3=1b6 items=1 ppid=3867 pid=3909 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.312000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 8 23:38:33.312000 audit: PATH item=0 name="/dev/fd/63" inode=33170 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:33.312000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:38:33.352000 audit[3916]: AVC avc: denied { write } for pid=3916 comm="tee" name="fd" dev="proc" ino=33799 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:38:33.352000 audit[3916]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc6d368975 a2=241 a3=1b6 items=1 ppid=3875 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.352000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 8 23:38:33.352000 audit: PATH item=0 name="/dev/fd/63" inode=32753 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:33.352000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:38:33.352000 audit[3920]: AVC avc: denied { write } for pid=3920 comm="tee" name="fd" dev="proc" ino=33802 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:38:33.352000 audit[3920]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 
a0=ffffff9c a1=7fff751f1976 a2=241 a3=1b6 items=1 ppid=3884 pid=3920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.352000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 8 23:38:33.352000 audit: PATH item=0 name="/dev/fd/63" inode=32756 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:33.352000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:38:33.359000 audit[3925]: AVC avc: denied { write } for pid=3925 comm="tee" name="fd" dev="proc" ino=33173 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 23:38:33.359000 audit[3925]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd3aaf6977 a2=241 a3=1b6 items=1 ppid=3882 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.359000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 8 23:38:33.359000 audit: PATH item=0 name="/dev/fd/63" inode=32765 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:33.359000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:38:33.363000 audit[3928]: AVC avc: denied { write } for pid=3928 comm="tee" name="fd" dev="proc" ino=33177 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 8 
23:38:33.363000 audit[3928]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff2f58e965 a2=241 a3=1b6 items=1 ppid=3878 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.363000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 8 23:38:33.363000 audit: PATH item=0 name="/dev/fd/63" inode=32768 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:33.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 8 23:38:33.664912 env[1405]: time="2024-02-08T23:38:33.664793607Z" level=info msg="StopPodSandbox for \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\"" Feb 8 23:38:33.667547 env[1405]: time="2024-02-08T23:38:33.667501218Z" level=info msg="StopPodSandbox for \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\"" Feb 8 23:38:33.745000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.745000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.745000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.745000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.745000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.745000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.745000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.745000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.745000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.745000 audit: BPF prog-id=10 op=LOAD Feb 8 23:38:33.745000 audit[4040]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8cd74f40 a2=70 a3=7efe393d4000 items=0 ppid=3877 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.745000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:38:33.747000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit: BPF prog-id=11 op=LOAD Feb 8 23:38:33.747000 audit[4040]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8cd74f40 a2=70 a3=6e items=0 ppid=3877 pid=4040 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.747000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:38:33.747000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff8cd74ef0 a2=70 a3=7fff8cd74f40 items=0 ppid=3877 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.747000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for 
pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit: BPF prog-id=12 op=LOAD Feb 8 23:38:33.747000 audit[4040]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff8cd74ed0 a2=70 a3=7fff8cd74f40 items=0 ppid=3877 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.747000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:38:33.747000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff8cd74fb0 a2=70 a3=0 items=0 ppid=3877 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.747000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff8cd74fa0 a2=70 a3=0 items=0 ppid=3877 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.747000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:38:33.747000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.747000 audit[4040]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff8cd74fe0 a2=70 a3=0 items=0 ppid=3877 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.747000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { perfmon } for pid=4040 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit[4040]: AVC avc: denied { bpf } for pid=4040 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.748000 audit: BPF prog-id=13 op=LOAD Feb 8 23:38:33.748000 audit[4040]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff8cd74f00 a2=70 a3=ffffffff items=0 ppid=3877 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.748000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 8 23:38:33.759000 audit[4044]: AVC avc: denied { bpf } for pid=4044 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.759000 audit[4044]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe8ddb7db0 a2=70 a3=fff80800 items=0 ppid=3877 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 8 23:38:33.759000 audit[4044]: AVC avc: denied { bpf } for pid=4044 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 8 23:38:33.759000 audit[4044]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe8ddb7c80 a2=70 a3=3 items=0 ppid=3877 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:33.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 8 23:38:33.767000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.772 [INFO][4021] k8s.go 578: Cleaning up netns ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.772 [INFO][4021] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" iface="eth0" netns="/var/run/netns/cni-a48e55b5-7b5b-b790-7ff9-feb202b464d3" Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.774 [INFO][4021] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" iface="eth0" netns="/var/run/netns/cni-a48e55b5-7b5b-b790-7ff9-feb202b464d3" Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.785 [INFO][4021] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" iface="eth0" netns="/var/run/netns/cni-a48e55b5-7b5b-b790-7ff9-feb202b464d3" Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.785 [INFO][4021] k8s.go 585: Releasing IP address(es) ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.785 [INFO][4021] utils.go 188: Calico CNI releasing IP address ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.826 [INFO][4051] ipam_plugin.go 415: Releasing address using handleID ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" HandleID="k8s-pod-network.576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.828 [INFO][4051] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.828 [INFO][4051] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.837 [WARNING][4051] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" HandleID="k8s-pod-network.576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.837 [INFO][4051] ipam_plugin.go 443: Releasing address using workloadID ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" HandleID="k8s-pod-network.576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.838 [INFO][4051] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:38:33.842756 env[1405]: 2024-02-08 23:38:33.840 [INFO][4021] k8s.go 591: Teardown processing complete. ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:38:33.843926 env[1405]: time="2024-02-08T23:38:33.843887185Z" level=info msg="TearDown network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\" successfully" Feb 8 23:38:33.844084 env[1405]: time="2024-02-08T23:38:33.844063186Z" level=info msg="StopPodSandbox for \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\" returns successfully" Feb 8 23:38:33.845166 env[1405]: time="2024-02-08T23:38:33.845129990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r5zfj,Uid:d47f938b-76c5-40ea-8321-fd8530afd202,Namespace:kube-system,Attempt:1,}" Feb 8 23:38:33.847012 systemd[1]: run-netns-cni\x2da48e55b5\x2d7b5b\x2db790\x2d7ff9\x2dfeb202b464d3.mount: Deactivated successfully. 
Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.768 [INFO][4012] k8s.go 578: Cleaning up netns ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.768 [INFO][4012] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" iface="eth0" netns="/var/run/netns/cni-649ee74b-db80-dd88-9e83-622466948351" Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.768 [INFO][4012] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" iface="eth0" netns="/var/run/netns/cni-649ee74b-db80-dd88-9e83-622466948351" Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.768 [INFO][4012] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" iface="eth0" netns="/var/run/netns/cni-649ee74b-db80-dd88-9e83-622466948351" Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.768 [INFO][4012] k8s.go 585: Releasing IP address(es) ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.769 [INFO][4012] utils.go 188: Calico CNI releasing IP address ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.838 [INFO][4048] ipam_plugin.go 415: Releasing address using handleID ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" HandleID="k8s-pod-network.14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.838 [INFO][4048] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.838 [INFO][4048] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.851 [WARNING][4048] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" HandleID="k8s-pod-network.14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.851 [INFO][4048] ipam_plugin.go 443: Releasing address using workloadID ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" HandleID="k8s-pod-network.14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.852 [INFO][4048] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:38:33.861834 env[1405]: 2024-02-08 23:38:33.860 [INFO][4012] k8s.go 591: Teardown processing complete. 
ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:38:33.866429 env[1405]: time="2024-02-08T23:38:33.866383070Z" level=info msg="TearDown network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\" successfully" Feb 8 23:38:33.868379 env[1405]: time="2024-02-08T23:38:33.866485971Z" level=info msg="StopPodSandbox for \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\" returns successfully" Feb 8 23:38:33.868379 env[1405]: time="2024-02-08T23:38:33.867077873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868b7ffccf-pz49r,Uid:c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b,Namespace:calico-system,Attempt:1,}" Feb 8 23:38:33.867032 systemd[1]: run-netns-cni\x2d649ee74b\x2ddb80\x2ddd88\x2d9e83\x2d622466948351.mount: Deactivated successfully. Feb 8 23:38:34.070000 audit[4117]: NETFILTER_CFG table=raw:117 family=2 entries=19 op=nft_register_chain pid=4117 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:34.070000 audit[4117]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffc60951550 a2=0 a3=7ffc6095153c items=0 ppid=3877 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:34.070000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:34.075000 audit[4122]: NETFILTER_CFG table=mangle:118 family=2 entries=19 op=nft_register_chain pid=4122 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:34.075000 audit[4122]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffc91cbf780 a2=0 a3=7ffc91cbf76c items=0 ppid=3877 pid=4122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:34.075000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:34.087000 audit[4118]: NETFILTER_CFG table=nat:119 family=2 entries=16 op=nft_register_chain pid=4118 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:34.087000 audit[4118]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffc206aab60 a2=0 a3=55e9c9e6e000 items=0 ppid=3877 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:34.087000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:34.090000 audit[4119]: NETFILTER_CFG table=filter:120 family=2 entries=39 op=nft_register_chain pid=4119 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:34.090000 audit[4119]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7fff50237080 a2=0 a3=558c78343000 items=0 ppid=3877 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:34.090000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:34.097457 systemd-networkd[1576]: cali4dbccb8f7dc: Link UP Feb 8 23:38:34.107904 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4dbccb8f7dc: link becomes ready Feb 8 23:38:34.103559 systemd-networkd[1576]: cali4dbccb8f7dc: 
Gained carrier Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:33.955 [INFO][4072] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0 coredns-787d4945fb- kube-system d47f938b-76c5-40ea-8321-fd8530afd202 756 0 2024-02-08 23:37:29 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-9933156126 coredns-787d4945fb-r5zfj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4dbccb8f7dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Namespace="kube-system" Pod="coredns-787d4945fb-r5zfj" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:33.955 [INFO][4072] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Namespace="kube-system" Pod="coredns-787d4945fb-r5zfj" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:33.999 [INFO][4095] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" HandleID="k8s-pod-network.1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.012 [INFO][4095] ipam_plugin.go 268: Auto assigning IP ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" HandleID="k8s-pod-network.1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" 
Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d930), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-9933156126", "pod":"coredns-787d4945fb-r5zfj", "timestamp":"2024-02-08 23:38:33.999778075 +0000 UTC"}, Hostname:"ci-3510.3.2-a-9933156126", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.013 [INFO][4095] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.013 [INFO][4095] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.013 [INFO][4095] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-9933156126' Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.018 [INFO][4095] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.028 [INFO][4095] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.035 [INFO][4095] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.044 [INFO][4095] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.054 [INFO][4095] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.054 [INFO][4095] ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.4.64/26 handle="k8s-pod-network.1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.055 [INFO][4095] ipam.go 1682: Creating new handle: k8s-pod-network.1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81 Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.060 [INFO][4095] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.065 [INFO][4095] ipam.go 1216: Successfully claimed IPs: [192.168.4.65/26] block=192.168.4.64/26 handle="k8s-pod-network.1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.066 [INFO][4095] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.65/26] handle="k8s-pod-network.1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.066 [INFO][4095] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:38:34.126477 env[1405]: 2024-02-08 23:38:34.066 [INFO][4095] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.4.65/26] IPv6=[] ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" HandleID="k8s-pod-network.1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:34.127849 env[1405]: 2024-02-08 23:38:34.072 [INFO][4072] k8s.go 385: Populated endpoint ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Namespace="kube-system" Pod="coredns-787d4945fb-r5zfj" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"d47f938b-76c5-40ea-8321-fd8530afd202", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"", Pod:"coredns-787d4945fb-r5zfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4dbccb8f7dc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:34.127849 env[1405]: 2024-02-08 23:38:34.072 [INFO][4072] k8s.go 386: Calico CNI using IPs: [192.168.4.65/32] ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Namespace="kube-system" Pod="coredns-787d4945fb-r5zfj" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:34.127849 env[1405]: 2024-02-08 23:38:34.072 [INFO][4072] dataplane_linux.go 68: Setting the host side veth name to cali4dbccb8f7dc ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Namespace="kube-system" Pod="coredns-787d4945fb-r5zfj" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:34.127849 env[1405]: 2024-02-08 23:38:34.111 [INFO][4072] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Namespace="kube-system" Pod="coredns-787d4945fb-r5zfj" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:34.127849 env[1405]: 2024-02-08 23:38:34.111 [INFO][4072] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Namespace="kube-system" Pod="coredns-787d4945fb-r5zfj" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"d47f938b-76c5-40ea-8321-fd8530afd202", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81", Pod:"coredns-787d4945fb-r5zfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4dbccb8f7dc", MAC:"ee:bf:97:fe:bf:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:34.127849 env[1405]: 2024-02-08 23:38:34.125 [INFO][4072] k8s.go 491: Wrote updated endpoint to datastore ContainerID="1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81" Namespace="kube-system" 
Pod="coredns-787d4945fb-r5zfj" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:38:34.148013 systemd-networkd[1576]: caliace42da70cc: Link UP Feb 8 23:38:34.154396 systemd-networkd[1576]: caliace42da70cc: Gained carrier Feb 8 23:38:34.154702 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliace42da70cc: link becomes ready Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.019 [INFO][4077] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0 calico-kube-controllers-868b7ffccf- calico-system c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b 755 0 2024-02-08 23:37:34 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:868b7ffccf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.2-a-9933156126 calico-kube-controllers-868b7ffccf-pz49r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliace42da70cc [] []}} ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Namespace="calico-system" Pod="calico-kube-controllers-868b7ffccf-pz49r" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.019 [INFO][4077] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Namespace="calico-system" Pod="calico-kube-controllers-868b7ffccf-pz49r" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.074 [INFO][4112] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" HandleID="k8s-pod-network.ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.090 [INFO][4112] ipam_plugin.go 268: Auto assigning IP ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" HandleID="k8s-pod-network.ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027ca20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-9933156126", "pod":"calico-kube-controllers-868b7ffccf-pz49r", "timestamp":"2024-02-08 23:38:34.074523957 +0000 UTC"}, Hostname:"ci-3510.3.2-a-9933156126", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.091 [INFO][4112] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.091 [INFO][4112] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.091 [INFO][4112] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-9933156126' Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.094 [INFO][4112] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.108 [INFO][4112] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.125 [INFO][4112] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.128 [INFO][4112] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.130 [INFO][4112] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.130 [INFO][4112] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.131 [INFO][4112] ipam.go 1682: Creating new handle: k8s-pod-network.ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.136 [INFO][4112] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.141 [INFO][4112] ipam.go 1216: Successfully claimed IPs: [192.168.4.66/26] block=192.168.4.64/26 
handle="k8s-pod-network.ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.141 [INFO][4112] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.66/26] handle="k8s-pod-network.ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.141 [INFO][4112] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:38:34.171154 env[1405]: 2024-02-08 23:38:34.141 [INFO][4112] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.4.66/26] IPv6=[] ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" HandleID="k8s-pod-network.ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:34.172227 env[1405]: 2024-02-08 23:38:34.143 [INFO][4077] k8s.go 385: Populated endpoint ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Namespace="calico-system" Pod="calico-kube-controllers-868b7ffccf-pz49r" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0", GenerateName:"calico-kube-controllers-868b7ffccf-", Namespace:"calico-system", SelfLink:"", UID:"c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"868b7ffccf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"", Pod:"calico-kube-controllers-868b7ffccf-pz49r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliace42da70cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:34.172227 env[1405]: 2024-02-08 23:38:34.143 [INFO][4077] k8s.go 386: Calico CNI using IPs: [192.168.4.66/32] ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Namespace="calico-system" Pod="calico-kube-controllers-868b7ffccf-pz49r" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:34.172227 env[1405]: 2024-02-08 23:38:34.143 [INFO][4077] dataplane_linux.go 68: Setting the host side veth name to caliace42da70cc ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Namespace="calico-system" Pod="calico-kube-controllers-868b7ffccf-pz49r" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:34.172227 env[1405]: 2024-02-08 23:38:34.155 [INFO][4077] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Namespace="calico-system" Pod="calico-kube-controllers-868b7ffccf-pz49r" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:34.172227 
env[1405]: 2024-02-08 23:38:34.155 [INFO][4077] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Namespace="calico-system" Pod="calico-kube-controllers-868b7ffccf-pz49r" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0", GenerateName:"calico-kube-controllers-868b7ffccf-", Namespace:"calico-system", SelfLink:"", UID:"c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"868b7ffccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e", Pod:"calico-kube-controllers-868b7ffccf-pz49r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliace42da70cc", MAC:"c2:70:c2:9f:bc:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:34.172227 
env[1405]: 2024-02-08 23:38:34.169 [INFO][4077] k8s.go 491: Wrote updated endpoint to datastore ContainerID="ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e" Namespace="calico-system" Pod="calico-kube-controllers-868b7ffccf-pz49r" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:38:34.172752 env[1405]: time="2024-02-08T23:38:34.171893023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:38:34.172752 env[1405]: time="2024-02-08T23:38:34.171938323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:38:34.172752 env[1405]: time="2024-02-08T23:38:34.171953923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:38:34.172752 env[1405]: time="2024-02-08T23:38:34.172094724Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81 pid=4150 runtime=io.containerd.runc.v2 Feb 8 23:38:34.195631 env[1405]: time="2024-02-08T23:38:34.195560012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:38:34.195903 env[1405]: time="2024-02-08T23:38:34.195860813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:38:34.196057 env[1405]: time="2024-02-08T23:38:34.196030714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:38:34.196363 env[1405]: time="2024-02-08T23:38:34.196325915Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e pid=4187 runtime=io.containerd.runc.v2 Feb 8 23:38:34.256105 env[1405]: time="2024-02-08T23:38:34.256064040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r5zfj,Uid:d47f938b-76c5-40ea-8321-fd8530afd202,Namespace:kube-system,Attempt:1,} returns sandbox id \"1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81\"" Feb 8 23:38:34.259383 env[1405]: time="2024-02-08T23:38:34.259346052Z" level=info msg="CreateContainer within sandbox \"1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:38:34.275000 audit[4233]: NETFILTER_CFG table=filter:121 family=2 entries=68 op=nft_register_chain pid=4233 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:34.275000 audit[4233]: SYSCALL arch=c000003e syscall=46 success=yes exit=38072 a0=3 a1=7fffbd2fa850 a2=0 a3=7fffbd2fa83c items=0 ppid=3877 pid=4233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:34.275000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:34.287888 env[1405]: time="2024-02-08T23:38:34.287849759Z" level=info msg="CreateContainer within sandbox \"1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ef0e9ad5f2576354e6eef478059815e6ff25e3d5d3eeb464dd0618a4e4864b9\"" Feb 8 23:38:34.288692 env[1405]: 
time="2024-02-08T23:38:34.288650662Z" level=info msg="StartContainer for \"7ef0e9ad5f2576354e6eef478059815e6ff25e3d5d3eeb464dd0618a4e4864b9\"" Feb 8 23:38:34.324090 env[1405]: time="2024-02-08T23:38:34.323965195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868b7ffccf-pz49r,Uid:c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b,Namespace:calico-system,Attempt:1,} returns sandbox id \"ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e\"" Feb 8 23:38:34.326271 env[1405]: time="2024-02-08T23:38:34.326236204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 8 23:38:34.360845 env[1405]: time="2024-02-08T23:38:34.360796134Z" level=info msg="StartContainer for \"7ef0e9ad5f2576354e6eef478059815e6ff25e3d5d3eeb464dd0618a4e4864b9\" returns successfully" Feb 8 23:38:34.519323 systemd-networkd[1576]: vxlan.calico: Link UP Feb 8 23:38:34.519331 systemd-networkd[1576]: vxlan.calico: Gained carrier Feb 8 23:38:34.918248 kubelet[2651]: I0208 23:38:34.918212 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-r5zfj" podStartSLOduration=65.918168231 pod.CreationTimestamp="2024-02-08 23:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:38:34.870556152 +0000 UTC m=+78.597318277" watchObservedRunningTime="2024-02-08 23:38:34.918168231 +0000 UTC m=+78.644930456" Feb 8 23:38:35.031000 audit[4302]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=4302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:35.031000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffef302b790 a2=0 a3=7ffef302b77c items=0 ppid=2808 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:35.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:35.033000 audit[4302]: NETFILTER_CFG table=nat:123 family=2 entries=30 op=nft_register_rule pid=4302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:35.033000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffef302b790 a2=0 a3=7ffef302b77c items=0 ppid=2808 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:35.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:35.088000 audit[4328]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=4328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:35.088000 audit[4328]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe43778f30 a2=0 a3=7ffe43778f1c items=0 ppid=2808 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:35.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:35.090000 audit[4328]: NETFILTER_CFG table=nat:125 family=2 entries=51 op=nft_register_chain pid=4328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:35.090000 audit[4328]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffe43778f30 a2=0 a3=7ffe43778f1c items=0 ppid=2808 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:35.090000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:35.536743 systemd[1]: run-containerd-runc-k8s.io-a1dc31ac7ad67788dcc47efdfc725038082aa67b9f2b203c8fd82332a70da0ac-runc.lW2wEe.mount: Deactivated successfully. Feb 8 23:38:35.644864 systemd-networkd[1576]: cali4dbccb8f7dc: Gained IPv6LL Feb 8 23:38:35.965836 systemd-networkd[1576]: caliace42da70cc: Gained IPv6LL Feb 8 23:38:36.028846 systemd-networkd[1576]: vxlan.calico: Gained IPv6LL Feb 8 23:38:36.665351 env[1405]: time="2024-02-08T23:38:36.665256052Z" level=info msg="StopPodSandbox for \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\"" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.725 [INFO][4368] k8s.go 578: Cleaning up netns ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.725 [INFO][4368] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" iface="eth0" netns="/var/run/netns/cni-4dc20d41-ead8-079a-6e41-310cf43736c6" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.725 [INFO][4368] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" iface="eth0" netns="/var/run/netns/cni-4dc20d41-ead8-079a-6e41-310cf43736c6" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.725 [INFO][4368] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" iface="eth0" netns="/var/run/netns/cni-4dc20d41-ead8-079a-6e41-310cf43736c6" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.725 [INFO][4368] k8s.go 585: Releasing IP address(es) ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.725 [INFO][4368] utils.go 188: Calico CNI releasing IP address ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.746 [INFO][4375] ipam_plugin.go 415: Releasing address using handleID ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" HandleID="k8s-pod-network.248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.747 [INFO][4375] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.747 [INFO][4375] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.753 [WARNING][4375] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" HandleID="k8s-pod-network.248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.753 [INFO][4375] ipam_plugin.go 443: Releasing address using workloadID ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" HandleID="k8s-pod-network.248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.755 [INFO][4375] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:38:36.757449 env[1405]: 2024-02-08 23:38:36.756 [INFO][4368] k8s.go 591: Teardown processing complete. ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:38:36.760904 systemd[1]: run-netns-cni\x2d4dc20d41\x2dead8\x2d079a\x2d6e41\x2d310cf43736c6.mount: Deactivated successfully. 
Feb 8 23:38:36.762081 env[1405]: time="2024-02-08T23:38:36.762037812Z" level=info msg="TearDown network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\" successfully" Feb 8 23:38:36.762142 env[1405]: time="2024-02-08T23:38:36.762081612Z" level=info msg="StopPodSandbox for \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\" returns successfully" Feb 8 23:38:36.762767 env[1405]: time="2024-02-08T23:38:36.762728415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pfb4q,Uid:b23333af-8873-429e-8aa7-941ea237b3cf,Namespace:calico-system,Attempt:1,}" Feb 8 23:38:36.904526 systemd-networkd[1576]: cali75932020261: Link UP Feb 8 23:38:36.915337 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:38:36.915460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali75932020261: link becomes ready Feb 8 23:38:36.916893 systemd-networkd[1576]: cali75932020261: Gained carrier Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.837 [INFO][4381] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0 csi-node-driver- calico-system b23333af-8873-429e-8aa7-941ea237b3cf 783 0 2024-02-08 23:37:34 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3510.3.2-a-9933156126 csi-node-driver-pfb4q eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali75932020261 [] []}} ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Namespace="calico-system" Pod="csi-node-driver-pfb4q" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.837 
[INFO][4381] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Namespace="calico-system" Pod="csi-node-driver-pfb4q" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.866 [INFO][4393] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" HandleID="k8s-pod-network.86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.875 [INFO][4393] ipam_plugin.go 268: Auto assigning IP ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" HandleID="k8s-pod-network.86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027da10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-9933156126", "pod":"csi-node-driver-pfb4q", "timestamp":"2024-02-08 23:38:36.866647401 +0000 UTC"}, Hostname:"ci-3510.3.2-a-9933156126", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.875 [INFO][4393] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.875 [INFO][4393] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.875 [INFO][4393] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-9933156126' Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.877 [INFO][4393] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.880 [INFO][4393] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-9933156126" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.883 [INFO][4393] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.885 [INFO][4393] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.887 [INFO][4393] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.887 [INFO][4393] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.888 [INFO][4393] ipam.go 1682: Creating new handle: k8s-pod-network.86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.891 [INFO][4393] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.896 [INFO][4393] ipam.go 1216: Successfully claimed IPs: [192.168.4.67/26] block=192.168.4.64/26 
handle="k8s-pod-network.86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.896 [INFO][4393] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.67/26] handle="k8s-pod-network.86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.896 [INFO][4393] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:38:36.937203 env[1405]: 2024-02-08 23:38:36.896 [INFO][4393] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.4.67/26] IPv6=[] ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" HandleID="k8s-pod-network.86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:38:36.938071 env[1405]: 2024-02-08 23:38:36.900 [INFO][4381] k8s.go 385: Populated endpoint ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Namespace="calico-system" Pod="csi-node-driver-pfb4q" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b23333af-8873-429e-8aa7-941ea237b3cf", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"", Pod:"csi-node-driver-pfb4q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali75932020261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:36.938071 env[1405]: 2024-02-08 23:38:36.901 [INFO][4381] k8s.go 386: Calico CNI using IPs: [192.168.4.67/32] ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Namespace="calico-system" Pod="csi-node-driver-pfb4q" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:38:36.938071 env[1405]: 2024-02-08 23:38:36.901 [INFO][4381] dataplane_linux.go 68: Setting the host side veth name to cali75932020261 ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Namespace="calico-system" Pod="csi-node-driver-pfb4q" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:38:36.938071 env[1405]: 2024-02-08 23:38:36.916 [INFO][4381] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Namespace="calico-system" Pod="csi-node-driver-pfb4q" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:38:36.938071 env[1405]: 2024-02-08 23:38:36.916 [INFO][4381] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Namespace="calico-system" 
Pod="csi-node-driver-pfb4q" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b23333af-8873-429e-8aa7-941ea237b3cf", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf", Pod:"csi-node-driver-pfb4q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali75932020261", MAC:"72:2e:2a:c4:d4:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:36.938071 env[1405]: 2024-02-08 23:38:36.935 [INFO][4381] k8s.go 491: Wrote updated endpoint to datastore ContainerID="86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf" Namespace="calico-system" Pod="csi-node-driver-pfb4q" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 
23:38:36.960330 env[1405]: time="2024-02-08T23:38:36.960237149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:38:36.960589 env[1405]: time="2024-02-08T23:38:36.960560550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:38:36.960718 env[1405]: time="2024-02-08T23:38:36.960694451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:38:36.960957 env[1405]: time="2024-02-08T23:38:36.960925352Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf pid=4421 runtime=io.containerd.runc.v2 Feb 8 23:38:37.031000 audit[4445]: NETFILTER_CFG table=filter:126 family=2 entries=38 op=nft_register_chain pid=4445 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:37.031000 audit[4445]: SYSCALL arch=c000003e syscall=46 success=yes exit=19508 a0=3 a1=7ffc3df9b7c0 a2=0 a3=7ffc3df9b7ac items=0 ppid=3877 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:37.031000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:37.062327 env[1405]: time="2024-02-08T23:38:37.062281727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pfb4q,Uid:b23333af-8873-429e-8aa7-941ea237b3cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf\"" Feb 8 23:38:38.140919 systemd-networkd[1576]: cali75932020261: 
Gained IPv6LL Feb 8 23:38:38.666009 env[1405]: time="2024-02-08T23:38:38.665962443Z" level=info msg="StopPodSandbox for \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\"" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.724 [INFO][4470] k8s.go 578: Cleaning up netns ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.724 [INFO][4470] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" iface="eth0" netns="/var/run/netns/cni-98b14e6d-6ff2-bc3d-a559-1aefaa15c046" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.725 [INFO][4470] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" iface="eth0" netns="/var/run/netns/cni-98b14e6d-6ff2-bc3d-a559-1aefaa15c046" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.725 [INFO][4470] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" iface="eth0" netns="/var/run/netns/cni-98b14e6d-6ff2-bc3d-a559-1aefaa15c046" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.725 [INFO][4470] k8s.go 585: Releasing IP address(es) ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.725 [INFO][4470] utils.go 188: Calico CNI releasing IP address ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.754 [INFO][4476] ipam_plugin.go 415: Releasing address using handleID ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" HandleID="k8s-pod-network.a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.754 [INFO][4476] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.754 [INFO][4476] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.759 [WARNING][4476] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" HandleID="k8s-pod-network.a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.759 [INFO][4476] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" HandleID="k8s-pod-network.a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.760 [INFO][4476] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:38:38.762878 env[1405]: 2024-02-08 23:38:38.761 [INFO][4470] k8s.go 591: Teardown processing complete. ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:38:38.766813 systemd[1]: run-netns-cni\x2d98b14e6d\x2d6ff2\x2dbc3d\x2da559\x2d1aefaa15c046.mount: Deactivated successfully. 
Feb 8 23:38:38.767842 env[1405]: time="2024-02-08T23:38:38.767796417Z" level=info msg="TearDown network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\" successfully" Feb 8 23:38:38.767958 env[1405]: time="2024-02-08T23:38:38.767839417Z" level=info msg="StopPodSandbox for \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\" returns successfully" Feb 8 23:38:38.768610 env[1405]: time="2024-02-08T23:38:38.768579420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-7nz52,Uid:f03ff7cf-0bee-448e-9a60-80431e41383c,Namespace:kube-system,Attempt:1,}" Feb 8 23:38:38.895692 systemd-networkd[1576]: cali50556f6b91f: Link UP Feb 8 23:38:38.908561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:38:38.908700 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali50556f6b91f: link becomes ready Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.829 [INFO][4483] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0 coredns-787d4945fb- kube-system f03ff7cf-0bee-448e-9a60-80431e41383c 794 0 2024-02-08 23:37:29 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-9933156126 coredns-787d4945fb-7nz52 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali50556f6b91f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Namespace="kube-system" Pod="coredns-787d4945fb-7nz52" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.829 [INFO][4483] k8s.go 76: Extracted identifiers for CmdAddK8s 
ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Namespace="kube-system" Pod="coredns-787d4945fb-7nz52" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.858 [INFO][4496] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" HandleID="k8s-pod-network.411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.866 [INFO][4496] ipam_plugin.go 268: Auto assigning IP ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" HandleID="k8s-pod-network.411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00022f910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-9933156126", "pod":"coredns-787d4945fb-7nz52", "timestamp":"2024-02-08 23:38:38.85830185 +0000 UTC"}, Hostname:"ci-3510.3.2-a-9933156126", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.866 [INFO][4496] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.867 [INFO][4496] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.867 [INFO][4496] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-9933156126' Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.868 [INFO][4496] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.872 [INFO][4496] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-9933156126" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.876 [INFO][4496] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.877 [INFO][4496] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.880 [INFO][4496] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.880 [INFO][4496] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.882 [INFO][4496] ipam.go 1682: Creating new handle: k8s-pod-network.411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592 Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.885 [INFO][4496] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.890 [INFO][4496] ipam.go 1216: Successfully claimed IPs: [192.168.4.68/26] block=192.168.4.64/26 
handle="k8s-pod-network.411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.890 [INFO][4496] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.68/26] handle="k8s-pod-network.411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.890 [INFO][4496] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:38:38.918553 env[1405]: 2024-02-08 23:38:38.890 [INFO][4496] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.4.68/26] IPv6=[] ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" HandleID="k8s-pod-network.411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.914485 systemd-networkd[1576]: cali50556f6b91f: Gained carrier Feb 8 23:38:38.919583 env[1405]: 2024-02-08 23:38:38.891 [INFO][4483] k8s.go 385: Populated endpoint ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Namespace="kube-system" Pod="coredns-787d4945fb-7nz52" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f03ff7cf-0bee-448e-9a60-80431e41383c", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"", Pod:"coredns-787d4945fb-7nz52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50556f6b91f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:38.919583 env[1405]: 2024-02-08 23:38:38.892 [INFO][4483] k8s.go 386: Calico CNI using IPs: [192.168.4.68/32] ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Namespace="kube-system" Pod="coredns-787d4945fb-7nz52" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.919583 env[1405]: 2024-02-08 23:38:38.892 [INFO][4483] dataplane_linux.go 68: Setting the host side veth name to cali50556f6b91f ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Namespace="kube-system" Pod="coredns-787d4945fb-7nz52" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.919583 env[1405]: 2024-02-08 23:38:38.896 [INFO][4483] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" 
Namespace="kube-system" Pod="coredns-787d4945fb-7nz52" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.919583 env[1405]: 2024-02-08 23:38:38.902 [INFO][4483] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Namespace="kube-system" Pod="coredns-787d4945fb-7nz52" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f03ff7cf-0bee-448e-9a60-80431e41383c", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592", Pod:"coredns-787d4945fb-7nz52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50556f6b91f", MAC:"a2:81:04:54:ac:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:38.919583 env[1405]: 2024-02-08 23:38:38.909 [INFO][4483] k8s.go 491: Wrote updated endpoint to datastore ContainerID="411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592" Namespace="kube-system" Pod="coredns-787d4945fb-7nz52" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:38:38.971118 kernel: kauditd_printk_skb: 126 callbacks suppressed Feb 8 23:38:38.971241 kernel: audit: type=1325 audit(1707435518.947:323): table=filter:127 family=2 entries=38 op=nft_register_chain pid=4518 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:38.947000 audit[4518]: NETFILTER_CFG table=filter:127 family=2 entries=38 op=nft_register_chain pid=4518 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:38.971350 env[1405]: time="2024-02-08T23:38:38.948499082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:38:38.971350 env[1405]: time="2024-02-08T23:38:38.948581582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:38:38.971350 env[1405]: time="2024-02-08T23:38:38.948620782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:38:38.971350 env[1405]: time="2024-02-08T23:38:38.948807183Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592 pid=4522 runtime=io.containerd.runc.v2 Feb 8 23:38:38.947000 audit[4518]: SYSCALL arch=c000003e syscall=46 success=yes exit=19088 a0=3 a1=7ffc8cd92420 a2=0 a3=7ffc8cd9240c items=0 ppid=3877 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:38.998386 kernel: audit: type=1300 audit(1707435518.947:323): arch=c000003e syscall=46 success=yes exit=19088 a0=3 a1=7ffc8cd92420 a2=0 a3=7ffc8cd9240c items=0 ppid=3877 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:38.998497 kernel: audit: type=1327 audit(1707435518.947:323): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:38.947000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:39.052363 env[1405]: time="2024-02-08T23:38:39.052312762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-7nz52,Uid:f03ff7cf-0bee-448e-9a60-80431e41383c,Namespace:kube-system,Attempt:1,} returns sandbox id \"411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592\"" Feb 8 23:38:39.055513 env[1405]: time="2024-02-08T23:38:39.055474374Z" level=info msg="CreateContainer within sandbox 
\"411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:38:39.084277 env[1405]: time="2024-02-08T23:38:39.084214179Z" level=info msg="CreateContainer within sandbox \"411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd66bafa1069710c40d78cde52e8ff743458a392080299135cab931db9f1262c\"" Feb 8 23:38:39.085794 env[1405]: time="2024-02-08T23:38:39.085721485Z" level=info msg="StartContainer for \"dd66bafa1069710c40d78cde52e8ff743458a392080299135cab931db9f1262c\"" Feb 8 23:38:39.171245 env[1405]: time="2024-02-08T23:38:39.170376194Z" level=info msg="StartContainer for \"dd66bafa1069710c40d78cde52e8ff743458a392080299135cab931db9f1262c\" returns successfully" Feb 8 23:38:39.904917 kubelet[2651]: I0208 23:38:39.904873 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-7nz52" podStartSLOduration=70.90483598 pod.CreationTimestamp="2024-02-08 23:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:38:39.892322034 +0000 UTC m=+83.619084159" watchObservedRunningTime="2024-02-08 23:38:39.90483598 +0000 UTC m=+83.631598105" Feb 8 23:38:39.984000 audit[4624]: NETFILTER_CFG table=filter:128 family=2 entries=6 op=nft_register_rule pid=4624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:39.996329 kernel: audit: type=1325 audit(1707435519.984:324): table=filter:128 family=2 entries=6 op=nft_register_rule pid=4624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:39.984000 audit[4624]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffee5c9a470 a2=0 a3=7ffee5c9a45c items=0 ppid=2808 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:39.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:40.025986 kernel: audit: type=1300 audit(1707435519.984:324): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffee5c9a470 a2=0 a3=7ffee5c9a45c items=0 ppid=2808 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:40.026083 kernel: audit: type=1327 audit(1707435519.984:324): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:39.985000 audit[4624]: NETFILTER_CFG table=nat:129 family=2 entries=60 op=nft_register_rule pid=4624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:40.036403 kernel: audit: type=1325 audit(1707435519.985:325): table=nat:129 family=2 entries=60 op=nft_register_rule pid=4624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:39.985000 audit[4624]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffee5c9a470 a2=0 a3=7ffee5c9a45c items=0 ppid=2808 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:39.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:40.065217 kernel: audit: type=1300 audit(1707435519.985:325): arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffee5c9a470 a2=0 a3=7ffee5c9a45c items=0 ppid=2808 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:40.065343 kernel: audit: type=1327 audit(1707435519.985:325): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:40.066701 kernel: audit: type=1325 audit(1707435520.056:326): table=filter:130 family=2 entries=6 op=nft_register_rule pid=4650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:40.056000 audit[4650]: NETFILTER_CFG table=filter:130 family=2 entries=6 op=nft_register_rule pid=4650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:40.056000 audit[4650]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffeacf8c5a0 a2=0 a3=7ffeacf8c58c items=0 ppid=2808 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:40.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:40.085000 audit[4650]: NETFILTER_CFG table=nat:131 family=2 entries=72 op=nft_register_chain pid=4650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:40.085000 audit[4650]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffeacf8c5a0 a2=0 a3=7ffeacf8c58c items=0 ppid=2808 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:40.085000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:40.892914 systemd-networkd[1576]: cali50556f6b91f: Gained IPv6LL Feb 8 23:38:48.836773 env[1405]: 
time="2024-02-08T23:38:48.836714476Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:48.842411 env[1405]: time="2024-02-08T23:38:48.842369596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:48.846094 env[1405]: time="2024-02-08T23:38:48.846058109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:48.849904 env[1405]: time="2024-02-08T23:38:48.849870122Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:48.850615 env[1405]: time="2024-02-08T23:38:48.850581025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 8 23:38:48.854290 env[1405]: time="2024-02-08T23:38:48.854257138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 8 23:38:48.869842 env[1405]: time="2024-02-08T23:38:48.869811792Z" level=info msg="CreateContainer within sandbox \"ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 8 23:38:48.894932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445336462.mount: Deactivated successfully. 
Feb 8 23:38:48.905098 env[1405]: time="2024-02-08T23:38:48.905062116Z" level=info msg="CreateContainer within sandbox \"ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9\"" Feb 8 23:38:48.907464 env[1405]: time="2024-02-08T23:38:48.905588518Z" level=info msg="StartContainer for \"893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9\"" Feb 8 23:38:48.974040 env[1405]: time="2024-02-08T23:38:48.973966557Z" level=info msg="StartContainer for \"893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9\" returns successfully" Feb 8 23:38:49.913971 kubelet[2651]: I0208 23:38:49.913935 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-868b7ffccf-pz49r" podStartSLOduration=-9.223371960940882e+09 pod.CreationTimestamp="2024-02-08 23:37:34 +0000 UTC" firstStartedPulling="2024-02-08 23:38:34.325632201 +0000 UTC m=+78.052394326" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:38:49.910580027 +0000 UTC m=+93.637342252" watchObservedRunningTime="2024-02-08 23:38:49.913894038 +0000 UTC m=+93.640656263" Feb 8 23:38:49.936424 systemd[1]: run-containerd-runc-k8s.io-893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9-runc.s6DmMK.mount: Deactivated successfully. Feb 8 23:38:51.113001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1304358138.mount: Deactivated successfully. Feb 8 23:38:51.199456 systemd[1]: run-containerd-runc-k8s.io-893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9-runc.yHtril.mount: Deactivated successfully. 
Feb 8 23:38:51.926680 env[1405]: time="2024-02-08T23:38:51.926620158Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:51.932363 env[1405]: time="2024-02-08T23:38:51.932323601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:51.936657 env[1405]: time="2024-02-08T23:38:51.936626409Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:51.940257 env[1405]: time="2024-02-08T23:38:51.940227599Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:51.940737 env[1405]: time="2024-02-08T23:38:51.940708811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 8 23:38:51.942960 env[1405]: time="2024-02-08T23:38:51.942931067Z" level=info msg="CreateContainer within sandbox \"86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 8 23:38:51.977537 env[1405]: time="2024-02-08T23:38:51.977498634Z" level=info msg="CreateContainer within sandbox \"86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"717a48b1a3f3b505fe098a47572761f1796b4b577d82235408035dc26fba9223\"" Feb 8 23:38:51.977959 env[1405]: time="2024-02-08T23:38:51.977930844Z" level=info msg="StartContainer for 
\"717a48b1a3f3b505fe098a47572761f1796b4b577d82235408035dc26fba9223\"" Feb 8 23:38:52.042977 env[1405]: time="2024-02-08T23:38:52.042876660Z" level=info msg="StartContainer for \"717a48b1a3f3b505fe098a47572761f1796b4b577d82235408035dc26fba9223\" returns successfully" Feb 8 23:38:52.044128 env[1405]: time="2024-02-08T23:38:52.044089990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 8 23:38:54.277035 kubelet[2651]: I0208 23:38:54.276994 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:38:54.289153 kubelet[2651]: W0208 23:38:54.289122 2651 reflector.go:424] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-9933156126" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510.3.2-a-9933156126' and this object Feb 8 23:38:54.289350 kubelet[2651]: E0208 23:38:54.289329 2651 reflector.go:140] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-9933156126" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510.3.2-a-9933156126' and this object Feb 8 23:38:54.296162 kubelet[2651]: I0208 23:38:54.296137 2651 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:38:54.340032 kubelet[2651]: I0208 23:38:54.339998 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0-calico-apiserver-certs\") pod \"calico-apiserver-f558c5857-vqwp2\" (UID: \"89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0\") " pod="calico-apiserver/calico-apiserver-f558c5857-vqwp2" Feb 8 23:38:54.340439 kubelet[2651]: I0208 23:38:54.340416 2651 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzq4k\" (UniqueName: \"kubernetes.io/projected/89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0-kube-api-access-kzq4k\") pod \"calico-apiserver-f558c5857-vqwp2\" (UID: \"89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0\") " pod="calico-apiserver/calico-apiserver-f558c5857-vqwp2" Feb 8 23:38:54.341817 kubelet[2651]: I0208 23:38:54.341774 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a42be8a2-ca76-44d2-a2fd-a773e658dfb4-calico-apiserver-certs\") pod \"calico-apiserver-f558c5857-67rpl\" (UID: \"a42be8a2-ca76-44d2-a2fd-a773e658dfb4\") " pod="calico-apiserver/calico-apiserver-f558c5857-67rpl" Feb 8 23:38:54.342305 kubelet[2651]: I0208 23:38:54.342286 2651 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g2pp\" (UniqueName: \"kubernetes.io/projected/a42be8a2-ca76-44d2-a2fd-a773e658dfb4-kube-api-access-2g2pp\") pod \"calico-apiserver-f558c5857-67rpl\" (UID: \"a42be8a2-ca76-44d2-a2fd-a773e658dfb4\") " pod="calico-apiserver/calico-apiserver-f558c5857-67rpl" Feb 8 23:38:54.370000 audit[4808]: NETFILTER_CFG table=filter:132 family=2 entries=7 op=nft_register_rule pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:54.375317 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 8 23:38:54.375413 kernel: audit: type=1325 audit(1707435534.370:328): table=filter:132 family=2 entries=7 op=nft_register_rule pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:54.370000 audit[4808]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe60720a00 a2=0 a3=7ffe607209ec items=0 ppid=2808 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:54.404069 kernel: audit: type=1300 audit(1707435534.370:328): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe60720a00 a2=0 a3=7ffe607209ec items=0 ppid=2808 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:54.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:54.413397 kernel: audit: type=1327 audit(1707435534.370:328): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:54.405000 audit[4808]: NETFILTER_CFG table=nat:133 family=2 entries=78 op=nft_register_rule pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:54.426973 kernel: audit: type=1325 audit(1707435534.405:329): table=nat:133 family=2 entries=78 op=nft_register_rule pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:54.405000 audit[4808]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe60720a00 a2=0 a3=7ffe607209ec items=0 ppid=2808 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:54.451272 kernel: audit: type=1300 audit(1707435534.405:329): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe60720a00 a2=0 a3=7ffe607209ec items=0 ppid=2808 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:54.452744 kubelet[2651]: E0208 23:38:54.452104 2651 secret.go:194] Couldn't get secret 
calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 8 23:38:54.452744 kubelet[2651]: E0208 23:38:54.452214 2651 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a42be8a2-ca76-44d2-a2fd-a773e658dfb4-calico-apiserver-certs podName:a42be8a2-ca76-44d2-a2fd-a773e658dfb4 nodeName:}" failed. No retries permitted until 2024-02-08 23:38:54.952174046 +0000 UTC m=+98.678936171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a42be8a2-ca76-44d2-a2fd-a773e658dfb4-calico-apiserver-certs") pod "calico-apiserver-f558c5857-67rpl" (UID: "a42be8a2-ca76-44d2-a2fd-a773e658dfb4") : secret "calico-apiserver-certs" not found Feb 8 23:38:54.452744 kubelet[2651]: E0208 23:38:54.452611 2651 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 8 23:38:54.452744 kubelet[2651]: E0208 23:38:54.452652 2651 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0-calico-apiserver-certs podName:89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0 nodeName:}" failed. No retries permitted until 2024-02-08 23:38:54.952637458 +0000 UTC m=+98.679399683 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0-calico-apiserver-certs") pod "calico-apiserver-f558c5857-vqwp2" (UID: "89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0") : secret "calico-apiserver-certs" not found Feb 8 23:38:54.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:54.462707 kernel: audit: type=1327 audit(1707435534.405:329): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:54.518000 audit[4834]: NETFILTER_CFG table=filter:134 family=2 entries=8 op=nft_register_rule pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:54.530690 kernel: audit: type=1325 audit(1707435534.518:330): table=filter:134 family=2 entries=8 op=nft_register_rule pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:54.518000 audit[4834]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc81463740 a2=0 a3=7ffc8146372c items=0 ppid=2808 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:54.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:54.564089 kernel: audit: type=1300 audit(1707435534.518:330): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc81463740 a2=0 a3=7ffc8146372c items=0 ppid=2808 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:54.564262 kernel: audit: type=1327 audit(1707435534.518:330): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:54.537000 audit[4834]: NETFILTER_CFG table=nat:135 family=2 entries=78 op=nft_register_rule pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:54.582704 kernel: audit: type=1325 audit(1707435534.537:331): table=nat:135 family=2 entries=78 op=nft_register_rule pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:38:54.537000 audit[4834]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc81463740 a2=0 a3=7ffc8146372c items=0 ppid=2808 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:54.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:38:55.061693 env[1405]: time="2024-02-08T23:38:55.060545030Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:55.067398 env[1405]: time="2024-02-08T23:38:55.067345892Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:55.071220 env[1405]: time="2024-02-08T23:38:55.071179883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:55.074397 env[1405]: time="2024-02-08T23:38:55.074365160Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:38:55.074931 env[1405]: time="2024-02-08T23:38:55.074900272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 8 23:38:55.076932 env[1405]: time="2024-02-08T23:38:55.076903420Z" level=info msg="CreateContainer within sandbox \"86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 8 23:38:55.106903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3572722427.mount: Deactivated successfully. Feb 8 23:38:55.118764 env[1405]: time="2024-02-08T23:38:55.118726419Z" level=info msg="CreateContainer within sandbox \"86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"66dc3c2c6a6da6c518e9c998a4ebec3d23fdd1d4d9ca464230f36520bdf6651b\"" Feb 8 23:38:55.119289 env[1405]: time="2024-02-08T23:38:55.119261131Z" level=info msg="StartContainer for \"66dc3c2c6a6da6c518e9c998a4ebec3d23fdd1d4d9ca464230f36520bdf6651b\"" Feb 8 23:38:55.186969 env[1405]: time="2024-02-08T23:38:55.186924547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558c5857-67rpl,Uid:a42be8a2-ca76-44d2-a2fd-a773e658dfb4,Namespace:calico-apiserver,Attempt:0,}" Feb 8 23:38:55.199940 env[1405]: time="2024-02-08T23:38:55.199899956Z" level=info msg="StartContainer for \"66dc3c2c6a6da6c518e9c998a4ebec3d23fdd1d4d9ca464230f36520bdf6651b\" returns successfully" Feb 8 23:38:55.202878 env[1405]: time="2024-02-08T23:38:55.202835026Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-f558c5857-vqwp2,Uid:89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0,Namespace:calico-apiserver,Attempt:0,}" Feb 8 23:38:55.375646 systemd-networkd[1576]: cali2b63f5bb107: Link UP Feb 8 23:38:55.386308 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:38:55.386480 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2b63f5bb107: link becomes ready Feb 8 23:38:55.390920 systemd-networkd[1576]: cali2b63f5bb107: Gained carrier Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.270 [INFO][4876] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0 calico-apiserver-f558c5857- calico-apiserver a42be8a2-ca76-44d2-a2fd-a773e658dfb4 897 0 2024-02-08 23:38:54 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f558c5857 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-a-9933156126 calico-apiserver-f558c5857-67rpl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2b63f5bb107 [] []}} ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-67rpl" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.270 [INFO][4876] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-67rpl" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.322 [INFO][4900] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" HandleID="k8s-pod-network.d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Workload="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.333 [INFO][4900] ipam_plugin.go 268: Auto assigning IP ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" HandleID="k8s-pod-network.d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Workload="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000501b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-9933156126", "pod":"calico-apiserver-f558c5857-67rpl", "timestamp":"2024-02-08 23:38:55.322425781 +0000 UTC"}, Hostname:"ci-3510.3.2-a-9933156126", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.333 [INFO][4900] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.333 [INFO][4900] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.333 [INFO][4900] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-9933156126' Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.335 [INFO][4900] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.340 [INFO][4900] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.343 [INFO][4900] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.345 [INFO][4900] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.347 [INFO][4900] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.347 [INFO][4900] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.349 [INFO][4900] ipam.go 1682: Creating new handle: k8s-pod-network.d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209 Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.355 [INFO][4900] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.360 [INFO][4900] ipam.go 1216: Successfully claimed IPs: [192.168.4.69/26] block=192.168.4.64/26 
handle="k8s-pod-network.d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.361 [INFO][4900] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.69/26] handle="k8s-pod-network.d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.361 [INFO][4900] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:38:55.403392 env[1405]: 2024-02-08 23:38:55.361 [INFO][4900] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.4.69/26] IPv6=[] ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" HandleID="k8s-pod-network.d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Workload="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" Feb 8 23:38:55.404311 env[1405]: 2024-02-08 23:38:55.367 [INFO][4876] k8s.go 385: Populated endpoint ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-67rpl" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0", GenerateName:"calico-apiserver-f558c5857-", Namespace:"calico-apiserver", SelfLink:"", UID:"a42be8a2-ca76-44d2-a2fd-a773e658dfb4", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 38, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558c5857", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"", Pod:"calico-apiserver-f558c5857-67rpl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b63f5bb107", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:55.404311 env[1405]: 2024-02-08 23:38:55.367 [INFO][4876] k8s.go 386: Calico CNI using IPs: [192.168.4.69/32] ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-67rpl" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" Feb 8 23:38:55.404311 env[1405]: 2024-02-08 23:38:55.367 [INFO][4876] dataplane_linux.go 68: Setting the host side veth name to cali2b63f5bb107 ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-67rpl" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" Feb 8 23:38:55.404311 env[1405]: 2024-02-08 23:38:55.390 [INFO][4876] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-67rpl" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" Feb 8 23:38:55.404311 env[1405]: 2024-02-08 23:38:55.392 [INFO][4876] k8s.go 413: Added Mac, interface name, and active container ID to 
endpoint ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-67rpl" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0", GenerateName:"calico-apiserver-f558c5857-", Namespace:"calico-apiserver", SelfLink:"", UID:"a42be8a2-ca76-44d2-a2fd-a773e658dfb4", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 38, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558c5857", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209", Pod:"calico-apiserver-f558c5857-67rpl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b63f5bb107", MAC:"d2:88:08:47:70:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:55.404311 env[1405]: 2024-02-08 23:38:55.401 [INFO][4876] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-67rpl" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--67rpl-eth0" Feb 8 23:38:55.440568 systemd-networkd[1576]: caliaad3bde83c3: Link UP Feb 8 23:38:55.446890 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaad3bde83c3: link becomes ready Feb 8 23:38:55.448201 systemd-networkd[1576]: caliaad3bde83c3: Gained carrier Feb 8 23:38:55.463113 env[1405]: time="2024-02-08T23:38:55.463037138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:38:55.463377 env[1405]: time="2024-02-08T23:38:55.463342545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:38:55.463531 env[1405]: time="2024-02-08T23:38:55.463506549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:38:55.463874 env[1405]: time="2024-02-08T23:38:55.463831857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209 pid=4937 runtime=io.containerd.runc.v2 Feb 8 23:38:55.464000 audit[4944]: NETFILTER_CFG table=filter:136 family=2 entries=59 op=nft_register_chain pid=4944 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:55.464000 audit[4944]: SYSCALL arch=c000003e syscall=46 success=yes exit=29292 a0=3 a1=7ffcf4092150 a2=0 a3=7ffcf409213c items=0 ppid=3877 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:55.464000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.306 [INFO][4886] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0 calico-apiserver-f558c5857- calico-apiserver 89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0 899 0 2024-02-08 23:38:54 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f558c5857 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-a-9933156126 calico-apiserver-f558c5857-vqwp2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaad3bde83c3 [] []}} ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Namespace="calico-apiserver" 
Pod="calico-apiserver-f558c5857-vqwp2" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.306 [INFO][4886] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-vqwp2" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.362 [INFO][4908] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" HandleID="k8s-pod-network.78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Workload="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.383 [INFO][4908] ipam_plugin.go 268: Auto assigning IP ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" HandleID="k8s-pod-network.78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Workload="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002be840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-9933156126", "pod":"calico-apiserver-f558c5857-vqwp2", "timestamp":"2024-02-08 23:38:55.362003926 +0000 UTC"}, Hostname:"ci-3510.3.2-a-9933156126", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.383 [INFO][4908] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.383 [INFO][4908] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.383 [INFO][4908] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-9933156126' Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.391 [INFO][4908] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.411 [INFO][4908] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.415 [INFO][4908] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.417 [INFO][4908] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.421 [INFO][4908] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.421 [INFO][4908] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.423 [INFO][4908] ipam.go 1682: Creating new handle: k8s-pod-network.78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.426 [INFO][4908] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.434 [INFO][4908] ipam.go 1216: Successfully claimed IPs: [192.168.4.70/26] block=192.168.4.64/26 
handle="k8s-pod-network.78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.435 [INFO][4908] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.70/26] handle="k8s-pod-network.78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" host="ci-3510.3.2-a-9933156126" Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.435 [INFO][4908] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:38:55.477829 env[1405]: 2024-02-08 23:38:55.435 [INFO][4908] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.4.70/26] IPv6=[] ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" HandleID="k8s-pod-network.78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Workload="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" Feb 8 23:38:55.479180 env[1405]: 2024-02-08 23:38:55.438 [INFO][4886] k8s.go 385: Populated endpoint ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-vqwp2" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0", GenerateName:"calico-apiserver-f558c5857-", Namespace:"calico-apiserver", SelfLink:"", UID:"89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 38, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558c5857", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"", Pod:"calico-apiserver-f558c5857-vqwp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaad3bde83c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:55.479180 env[1405]: 2024-02-08 23:38:55.438 [INFO][4886] k8s.go 386: Calico CNI using IPs: [192.168.4.70/32] ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-vqwp2" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" Feb 8 23:38:55.479180 env[1405]: 2024-02-08 23:38:55.438 [INFO][4886] dataplane_linux.go 68: Setting the host side veth name to caliaad3bde83c3 ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-vqwp2" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" Feb 8 23:38:55.479180 env[1405]: 2024-02-08 23:38:55.449 [INFO][4886] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-vqwp2" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" Feb 8 23:38:55.479180 env[1405]: 2024-02-08 23:38:55.449 [INFO][4886] k8s.go 413: Added Mac, interface name, and active container ID to 
endpoint ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-vqwp2" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0", GenerateName:"calico-apiserver-f558c5857-", Namespace:"calico-apiserver", SelfLink:"", UID:"89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 38, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558c5857", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c", Pod:"calico-apiserver-f558c5857-vqwp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaad3bde83c3", MAC:"ea:16:5c:c2:46:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:38:55.479180 env[1405]: 2024-02-08 23:38:55.475 [INFO][4886] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c" Namespace="calico-apiserver" Pod="calico-apiserver-f558c5857-vqwp2" WorkloadEndpoint="ci--3510.3.2--a--9933156126-k8s-calico--apiserver--f558c5857--vqwp2-eth0" Feb 8 23:38:55.499000 audit[4964]: NETFILTER_CFG table=filter:137 family=2 entries=56 op=nft_register_chain pid=4964 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 8 23:38:55.499000 audit[4964]: SYSCALL arch=c000003e syscall=46 success=yes exit=27348 a0=3 a1=7ffec8660f80 a2=0 a3=7ffec8660f6c items=0 ppid=3877 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:55.499000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 8 23:38:55.519588 env[1405]: time="2024-02-08T23:38:55.519519886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:38:55.519809 env[1405]: time="2024-02-08T23:38:55.519776792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:38:55.519978 env[1405]: time="2024-02-08T23:38:55.519932496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:38:55.520286 env[1405]: time="2024-02-08T23:38:55.520250904Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c pid=4977 runtime=io.containerd.runc.v2 Feb 8 23:38:55.608297 env[1405]: time="2024-02-08T23:38:55.608246604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558c5857-67rpl,Uid:a42be8a2-ca76-44d2-a2fd-a773e658dfb4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209\"" Feb 8 23:38:55.609800 env[1405]: time="2024-02-08T23:38:55.609767941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 8 23:38:55.630503 env[1405]: time="2024-02-08T23:38:55.630406533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558c5857-vqwp2,Uid:89ba79cb-63bd-45d7-b6b1-b9ca0423a5d0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c\"" Feb 8 23:38:55.722187 kubelet[2651]: I0208 23:38:55.722157 2651 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 8 23:38:55.722187 kubelet[2651]: I0208 23:38:55.722202 2651 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 8 23:38:55.936246 kubelet[2651]: I0208 23:38:55.935859 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-pfb4q" podStartSLOduration=-9.223371954918978e+09 pod.CreationTimestamp="2024-02-08 23:37:34 +0000 UTC" firstStartedPulling="2024-02-08 23:38:37.063920233 +0000 UTC m=+80.790682358" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-02-08 23:38:55.931640825 +0000 UTC m=+99.658402950" watchObservedRunningTime="2024-02-08 23:38:55.935798224 +0000 UTC m=+99.662560349" Feb 8 23:38:56.765114 systemd-networkd[1576]: caliaad3bde83c3: Gained IPv6LL Feb 8 23:38:57.148886 systemd-networkd[1576]: cali2b63f5bb107: Gained IPv6LL Feb 8 23:39:01.638307 env[1405]: time="2024-02-08T23:39:01.638256679Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:39:01.643121 env[1405]: time="2024-02-08T23:39:01.643084186Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:39:01.647287 env[1405]: time="2024-02-08T23:39:01.647257379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:39:01.650745 env[1405]: time="2024-02-08T23:39:01.650710755Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:39:01.651420 env[1405]: time="2024-02-08T23:39:01.651389671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 8 23:39:01.654232 env[1405]: time="2024-02-08T23:39:01.653131309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 8 23:39:01.654680 env[1405]: time="2024-02-08T23:39:01.654634043Z" level=info msg="CreateContainer within sandbox \"d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 8 23:39:01.681999 env[1405]: time="2024-02-08T23:39:01.681956150Z" level=info msg="CreateContainer within sandbox \"d82ea1332d3448c555db4e7706737be30aa978f8cd3acf2696346a6e5cafa209\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"041cb74cd331a285874b342702f283f2c5d177c7fbbeb085e3efa355d4ea2706\"" Feb 8 23:39:01.683514 env[1405]: time="2024-02-08T23:39:01.682368859Z" level=info msg="StartContainer for \"041cb74cd331a285874b342702f283f2c5d177c7fbbeb085e3efa355d4ea2706\"" Feb 8 23:39:01.758146 env[1405]: time="2024-02-08T23:39:01.758100541Z" level=info msg="StartContainer for \"041cb74cd331a285874b342702f283f2c5d177c7fbbeb085e3efa355d4ea2706\" returns successfully" Feb 8 23:39:02.031394 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 8 23:39:02.031554 kernel: audit: type=1325 audit(1707435542.014:334): table=filter:138 family=2 entries=8 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:02.014000 audit[5089]: NETFILTER_CFG table=filter:138 family=2 entries=8 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:02.014000 audit[5089]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffeee3d4010 a2=0 a3=7ffeee3d3ffc items=0 ppid=2808 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:02.053266 kernel: audit: type=1300 audit(1707435542.014:334): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffeee3d4010 a2=0 a3=7ffeee3d3ffc items=0 ppid=2808 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:02.014000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:02.077321 kernel: audit: type=1327 audit(1707435542.014:334): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:02.077507 kernel: audit: type=1325 audit(1707435542.054:335): table=nat:139 family=2 entries=78 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:02.054000 audit[5089]: NETFILTER_CFG table=nat:139 family=2 entries=78 op=nft_register_rule pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:02.054000 audit[5089]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffeee3d4010 a2=0 a3=7ffeee3d3ffc items=0 ppid=2808 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:02.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:02.108180 kernel: audit: type=1300 audit(1707435542.054:335): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffeee3d4010 a2=0 a3=7ffeee3d3ffc items=0 ppid=2808 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:02.108261 kernel: audit: type=1327 audit(1707435542.054:335): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:02.232015 env[1405]: time="2024-02-08T23:39:02.231961307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 8 23:39:02.238135 env[1405]: time="2024-02-08T23:39:02.238089542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:39:02.244450 env[1405]: time="2024-02-08T23:39:02.244411781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:39:02.249314 env[1405]: time="2024-02-08T23:39:02.249278588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:39:02.250353 env[1405]: time="2024-02-08T23:39:02.250317910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 8 23:39:02.257311 env[1405]: time="2024-02-08T23:39:02.257274863Z" level=info msg="CreateContainer within sandbox \"78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 8 23:39:02.293168 env[1405]: time="2024-02-08T23:39:02.293081949Z" level=info msg="CreateContainer within sandbox \"78e26681fcc10571cf1ecec291733f0e81c64f1648def883bd8cc815c566749c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bdd391fc1d7faad6c3679ce331e0dfb2e9a82712ffae625e3980bc14ae27f290\"" Feb 8 23:39:02.294395 env[1405]: time="2024-02-08T23:39:02.294366678Z" level=info msg="StartContainer for \"bdd391fc1d7faad6c3679ce331e0dfb2e9a82712ffae625e3980bc14ae27f290\"" Feb 8 23:39:02.401963 env[1405]: time="2024-02-08T23:39:02.401903038Z" level=info msg="StartContainer for 
\"bdd391fc1d7faad6c3679ce331e0dfb2e9a82712ffae625e3980bc14ae27f290\" returns successfully" Feb 8 23:39:02.950558 kubelet[2651]: I0208 23:39:02.950517 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f558c5857-67rpl" podStartSLOduration=-9.223372027904299e+09 pod.CreationTimestamp="2024-02-08 23:38:54 +0000 UTC" firstStartedPulling="2024-02-08 23:38:55.60931973 +0000 UTC m=+99.336081955" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:39:01.95387739 +0000 UTC m=+105.680639515" watchObservedRunningTime="2024-02-08 23:39:02.950477482 +0000 UTC m=+106.677239607" Feb 8 23:39:03.046000 audit[5150]: NETFILTER_CFG table=filter:140 family=2 entries=8 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:03.059691 kernel: audit: type=1325 audit(1707435543.046:336): table=filter:140 family=2 entries=8 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:03.046000 audit[5150]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffda17cc6c0 a2=0 a3=7ffda17cc6ac items=0 ppid=2808 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:03.046000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:03.092592 kernel: audit: type=1300 audit(1707435543.046:336): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffda17cc6c0 a2=0 a3=7ffda17cc6ac items=0 ppid=2808 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:03.092705 kernel: audit: type=1327 
audit(1707435543.046:336): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:03.092732 kernel: audit: type=1325 audit(1707435543.064:337): table=nat:141 family=2 entries=78 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:03.064000 audit[5150]: NETFILTER_CFG table=nat:141 family=2 entries=78 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:03.064000 audit[5150]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffda17cc6c0 a2=0 a3=7ffda17cc6ac items=0 ppid=2808 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:03.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:16.410292 env[1405]: time="2024-02-08T23:39:16.410244236Z" level=info msg="StopPodSandbox for \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\"" Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.451 [WARNING][5197] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f03ff7cf-0bee-448e-9a60-80431e41383c", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592", Pod:"coredns-787d4945fb-7nz52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50556f6b91f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.452 [INFO][5197] k8s.go 578: 
Cleaning up netns ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.452 [INFO][5197] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" iface="eth0" netns="" Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.452 [INFO][5197] k8s.go 585: Releasing IP address(es) ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.452 [INFO][5197] utils.go 188: Calico CNI releasing IP address ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.469 [INFO][5203] ipam_plugin.go 415: Releasing address using handleID ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" HandleID="k8s-pod-network.a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.469 [INFO][5203] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.469 [INFO][5203] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.475 [WARNING][5203] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" HandleID="k8s-pod-network.a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.475 [INFO][5203] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" HandleID="k8s-pod-network.a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.476 [INFO][5203] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:39:16.478688 env[1405]: 2024-02-08 23:39:16.477 [INFO][5197] k8s.go 591: Teardown processing complete. ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:39:16.479368 env[1405]: time="2024-02-08T23:39:16.478713016Z" level=info msg="TearDown network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\" successfully" Feb 8 23:39:16.479368 env[1405]: time="2024-02-08T23:39:16.478755317Z" level=info msg="StopPodSandbox for \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\" returns successfully" Feb 8 23:39:16.479518 env[1405]: time="2024-02-08T23:39:16.479486231Z" level=info msg="RemovePodSandbox for \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\"" Feb 8 23:39:16.479586 env[1405]: time="2024-02-08T23:39:16.479525532Z" level=info msg="Forcibly stopping sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\"" Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.510 [WARNING][5221] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f03ff7cf-0bee-448e-9a60-80431e41383c", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"411fc85a620b2b6366a99561472f6905b3609c67b2421c7e6222dbe8f890c592", Pod:"coredns-787d4945fb-7nz52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50556f6b91f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.510 [INFO][5221] k8s.go 578: 
Cleaning up netns ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.510 [INFO][5221] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" iface="eth0" netns="" Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.510 [INFO][5221] k8s.go 585: Releasing IP address(es) ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.510 [INFO][5221] utils.go 188: Calico CNI releasing IP address ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.532 [INFO][5227] ipam_plugin.go 415: Releasing address using handleID ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" HandleID="k8s-pod-network.a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.532 [INFO][5227] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.532 [INFO][5227] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.538 [WARNING][5227] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" HandleID="k8s-pod-network.a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.538 [INFO][5227] ipam_plugin.go 443: Releasing address using workloadID ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" HandleID="k8s-pod-network.a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--7nz52-eth0" Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.539 [INFO][5227] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:39:16.541372 env[1405]: 2024-02-08 23:39:16.540 [INFO][5221] k8s.go 591: Teardown processing complete. ContainerID="a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431" Feb 8 23:39:16.542058 env[1405]: time="2024-02-08T23:39:16.541411489Z" level=info msg="TearDown network for sandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\" successfully" Feb 8 23:39:16.551767 env[1405]: time="2024-02-08T23:39:16.551729682Z" level=info msg="RemovePodSandbox \"a1830767198d420c4b5c78d195f5b6fc3a65416c796dcce0e0313f96d275a431\" returns successfully" Feb 8 23:39:16.552403 env[1405]: time="2024-02-08T23:39:16.552372894Z" level=info msg="StopPodSandbox for \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\"" Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.583 [WARNING][5246] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b23333af-8873-429e-8aa7-941ea237b3cf", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf", Pod:"csi-node-driver-pfb4q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali75932020261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.583 [INFO][5246] k8s.go 578: Cleaning up netns ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.583 [INFO][5246] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" iface="eth0" netns="" Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.583 [INFO][5246] k8s.go 585: Releasing IP address(es) ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.583 [INFO][5246] utils.go 188: Calico CNI releasing IP address ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.602 [INFO][5252] ipam_plugin.go 415: Releasing address using handleID ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" HandleID="k8s-pod-network.248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.603 [INFO][5252] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.603 [INFO][5252] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.608 [WARNING][5252] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" HandleID="k8s-pod-network.248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.608 [INFO][5252] ipam_plugin.go 443: Releasing address using workloadID ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" HandleID="k8s-pod-network.248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.609 [INFO][5252] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:39:16.611553 env[1405]: 2024-02-08 23:39:16.610 [INFO][5246] k8s.go 591: Teardown processing complete. ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:39:16.612189 env[1405]: time="2024-02-08T23:39:16.612144711Z" level=info msg="TearDown network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\" successfully" Feb 8 23:39:16.612278 env[1405]: time="2024-02-08T23:39:16.612186212Z" level=info msg="StopPodSandbox for \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\" returns successfully" Feb 8 23:39:16.612774 env[1405]: time="2024-02-08T23:39:16.612743022Z" level=info msg="RemovePodSandbox for \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\"" Feb 8 23:39:16.612980 env[1405]: time="2024-02-08T23:39:16.612916625Z" level=info msg="Forcibly stopping sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\"" Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.643 [WARNING][5271] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b23333af-8873-429e-8aa7-941ea237b3cf", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"86c586e9ce40e8b94f9146df694d8764456b3dc2aedcf7542e5cd9fe361ffccf", Pod:"csi-node-driver-pfb4q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali75932020261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.643 [INFO][5271] k8s.go 578: Cleaning up netns ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.643 [INFO][5271] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" iface="eth0" netns="" Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.643 [INFO][5271] k8s.go 585: Releasing IP address(es) ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.643 [INFO][5271] utils.go 188: Calico CNI releasing IP address ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.671 [INFO][5277] ipam_plugin.go 415: Releasing address using handleID ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" HandleID="k8s-pod-network.248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.671 [INFO][5277] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.671 [INFO][5277] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.677 [WARNING][5277] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" HandleID="k8s-pod-network.248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.677 [INFO][5277] ipam_plugin.go 443: Releasing address using workloadID ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" HandleID="k8s-pod-network.248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Workload="ci--3510.3.2--a--9933156126-k8s-csi--node--driver--pfb4q-eth0" Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.678 [INFO][5277] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 8 23:39:16.681417 env[1405]: 2024-02-08 23:39:16.679 [INFO][5271] k8s.go 591: Teardown processing complete. ContainerID="248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9" Feb 8 23:39:16.682723 env[1405]: time="2024-02-08T23:39:16.681380106Z" level=info msg="TearDown network for sandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\" successfully" Feb 8 23:39:16.689885 env[1405]: time="2024-02-08T23:39:16.689846464Z" level=info msg="RemovePodSandbox \"248524723706105a537a09f266f77c4743ab8559911cac1bb0c36c4af7bd93a9\" returns successfully" Feb 8 23:39:16.690421 env[1405]: time="2024-02-08T23:39:16.690385574Z" level=info msg="StopPodSandbox for \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\"" Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.729 [WARNING][5297] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0", GenerateName:"calico-kube-controllers-868b7ffccf-", Namespace:"calico-system", SelfLink:"", UID:"c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"868b7ffccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e", Pod:"calico-kube-controllers-868b7ffccf-pz49r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliace42da70cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.729 [INFO][5297] k8s.go 578: Cleaning up netns ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.729 [INFO][5297] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" iface="eth0" netns="" Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.730 [INFO][5297] k8s.go 585: Releasing IP address(es) ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.730 [INFO][5297] utils.go 188: Calico CNI releasing IP address ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.748 [INFO][5303] ipam_plugin.go 415: Releasing address using handleID ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" HandleID="k8s-pod-network.14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.748 [INFO][5303] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.748 [INFO][5303] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.753 [WARNING][5303] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" HandleID="k8s-pod-network.14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.754 [INFO][5303] ipam_plugin.go 443: Releasing address using workloadID ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" HandleID="k8s-pod-network.14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.755 [INFO][5303] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:39:16.757161 env[1405]: 2024-02-08 23:39:16.756 [INFO][5297] k8s.go 591: Teardown processing complete. 
ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:39:16.757859 env[1405]: time="2024-02-08T23:39:16.757200823Z" level=info msg="TearDown network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\" successfully" Feb 8 23:39:16.757859 env[1405]: time="2024-02-08T23:39:16.757237324Z" level=info msg="StopPodSandbox for \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\" returns successfully" Feb 8 23:39:16.757859 env[1405]: time="2024-02-08T23:39:16.757798834Z" level=info msg="RemovePodSandbox for \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\"" Feb 8 23:39:16.757979 env[1405]: time="2024-02-08T23:39:16.757837435Z" level=info msg="Forcibly stopping sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\"" Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.788 [WARNING][5321] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0", GenerateName:"calico-kube-controllers-868b7ffccf-", Namespace:"calico-system", SelfLink:"", UID:"c7e20c77-e9f3-4ad8-8c12-2fafbaaed94b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"868b7ffccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"ec29a6121ce579bd73354c916d58b8deb4c33c110e106af1a75343d333e5c05e", Pod:"calico-kube-controllers-868b7ffccf-pz49r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliace42da70cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.788 [INFO][5321] k8s.go 578: Cleaning up netns ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.788 [INFO][5321] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" iface="eth0" netns="" Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.788 [INFO][5321] k8s.go 585: Releasing IP address(es) ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.788 [INFO][5321] utils.go 188: Calico CNI releasing IP address ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.808 [INFO][5328] ipam_plugin.go 415: Releasing address using handleID ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" HandleID="k8s-pod-network.14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.808 [INFO][5328] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.808 [INFO][5328] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.814 [WARNING][5328] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" HandleID="k8s-pod-network.14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.818 [INFO][5328] ipam_plugin.go 443: Releasing address using workloadID ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" HandleID="k8s-pod-network.14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Workload="ci--3510.3.2--a--9933156126-k8s-calico--kube--controllers--868b7ffccf--pz49r-eth0" Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.819 [INFO][5328] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:39:16.821283 env[1405]: 2024-02-08 23:39:16.820 [INFO][5321] k8s.go 591: Teardown processing complete. ContainerID="14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d" Feb 8 23:39:16.821948 env[1405]: time="2024-02-08T23:39:16.821314722Z" level=info msg="TearDown network for sandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\" successfully" Feb 8 23:39:16.827652 env[1405]: time="2024-02-08T23:39:16.827608839Z" level=info msg="RemovePodSandbox \"14e3d87bc71e55645cef1fa9761a8c20519c68b89dd925b46c45cbca3211468d\" returns successfully" Feb 8 23:39:16.828126 env[1405]: time="2024-02-08T23:39:16.828096549Z" level=info msg="StopPodSandbox for \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\"" Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.857 [WARNING][5346] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"d47f938b-76c5-40ea-8321-fd8530afd202", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81", Pod:"coredns-787d4945fb-r5zfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4dbccb8f7dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.858 [INFO][5346] k8s.go 578: 
Cleaning up netns ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.858 [INFO][5346] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" iface="eth0" netns="" Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.858 [INFO][5346] k8s.go 585: Releasing IP address(es) ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.858 [INFO][5346] utils.go 188: Calico CNI releasing IP address ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.875 [INFO][5352] ipam_plugin.go 415: Releasing address using handleID ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" HandleID="k8s-pod-network.576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.875 [INFO][5352] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.875 [INFO][5352] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.880 [WARNING][5352] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" HandleID="k8s-pod-network.576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.881 [INFO][5352] ipam_plugin.go 443: Releasing address using workloadID ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" HandleID="k8s-pod-network.576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.882 [INFO][5352] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:39:16.884517 env[1405]: 2024-02-08 23:39:16.883 [INFO][5346] k8s.go 591: Teardown processing complete. ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:39:16.885175 env[1405]: time="2024-02-08T23:39:16.884552804Z" level=info msg="TearDown network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\" successfully" Feb 8 23:39:16.885175 env[1405]: time="2024-02-08T23:39:16.884590505Z" level=info msg="StopPodSandbox for \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\" returns successfully" Feb 8 23:39:16.885483 env[1405]: time="2024-02-08T23:39:16.885444521Z" level=info msg="RemovePodSandbox for \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\"" Feb 8 23:39:16.885595 env[1405]: time="2024-02-08T23:39:16.885488122Z" level=info msg="Forcibly stopping sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\"" Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.937 [WARNING][5372] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"d47f938b-76c5-40ea-8321-fd8530afd202", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.February, 8, 23, 37, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-9933156126", ContainerID:"1eec32102642769e413054b8274bc1740bdc15ddcc3b42b208788f56e155dc81", Pod:"coredns-787d4945fb-r5zfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4dbccb8f7dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.937 [INFO][5372] k8s.go 578: 
Cleaning up netns ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.937 [INFO][5372] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" iface="eth0" netns="" Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.937 [INFO][5372] k8s.go 585: Releasing IP address(es) ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.937 [INFO][5372] utils.go 188: Calico CNI releasing IP address ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.956 [INFO][5378] ipam_plugin.go 415: Releasing address using handleID ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" HandleID="k8s-pod-network.576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.956 [INFO][5378] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.956 [INFO][5378] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.963 [WARNING][5378] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" HandleID="k8s-pod-network.576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.963 [INFO][5378] ipam_plugin.go 443: Releasing address using workloadID ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" HandleID="k8s-pod-network.576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Workload="ci--3510.3.2--a--9933156126-k8s-coredns--787d4945fb--r5zfj-eth0" Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.965 [INFO][5378] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 8 23:39:16.969505 env[1405]: 2024-02-08 23:39:16.968 [INFO][5372] k8s.go 591: Teardown processing complete. ContainerID="576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968" Feb 8 23:39:16.970198 env[1405]: time="2024-02-08T23:39:16.969525593Z" level=info msg="TearDown network for sandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\" successfully" Feb 8 23:39:16.976416 env[1405]: time="2024-02-08T23:39:16.976380721Z" level=info msg="RemovePodSandbox \"576142aca21b68cc498501cf088ea4f9b30ce142623a64ad28b4c88568b2c968\" returns successfully" Feb 8 23:39:21.196951 systemd[1]: run-containerd-runc-k8s.io-893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9-runc.Q7mZ82.mount: Deactivated successfully. Feb 8 23:39:21.246195 systemd[1]: run-containerd-runc-k8s.io-893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9-runc.XnVgfB.mount: Deactivated successfully. Feb 8 23:39:25.214930 systemd[1]: run-containerd-runc-k8s.io-041cb74cd331a285874b342702f283f2c5d177c7fbbeb085e3efa355d4ea2706-runc.ynWKzB.mount: Deactivated successfully. 
Feb 8 23:39:25.287540 kubelet[2651]: I0208 23:39:25.286934 2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f558c5857-vqwp2" podStartSLOduration=-9.223372005567888e+09 pod.CreationTimestamp="2024-02-08 23:38:54 +0000 UTC" firstStartedPulling="2024-02-08 23:38:55.631706764 +0000 UTC m=+99.358468889" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:39:02.951015993 +0000 UTC m=+106.677778218" watchObservedRunningTime="2024-02-08 23:39:25.286887637 +0000 UTC m=+129.013649762" Feb 8 23:39:25.366728 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 8 23:39:25.366863 kernel: audit: type=1325 audit(1707435565.356:338): table=filter:142 family=2 entries=7 op=nft_register_rule pid=5489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:25.356000 audit[5489]: NETFILTER_CFG table=filter:142 family=2 entries=7 op=nft_register_rule pid=5489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:25.373696 kernel: audit: type=1300 audit(1707435565.356:338): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff2271d480 a2=0 a3=7fff2271d46c items=0 ppid=2808 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:25.356000 audit[5489]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff2271d480 a2=0 a3=7fff2271d46c items=0 ppid=2808 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:25.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:25.356000 audit[5489]: NETFILTER_CFG table=nat:143 family=2 
entries=85 op=nft_register_chain pid=5489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:25.411474 kernel: audit: type=1327 audit(1707435565.356:338): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:25.411560 kernel: audit: type=1325 audit(1707435565.356:339): table=nat:143 family=2 entries=85 op=nft_register_chain pid=5489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:25.411590 kernel: audit: type=1300 audit(1707435565.356:339): arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7fff2271d480 a2=0 a3=7fff2271d46c items=0 ppid=2808 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:25.356000 audit[5489]: SYSCALL arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7fff2271d480 a2=0 a3=7fff2271d46c items=0 ppid=2808 pid=5489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:25.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:25.440467 kernel: audit: type=1327 audit(1707435565.356:339): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:25.490000 audit[5515]: NETFILTER_CFG table=filter:144 family=2 entries=6 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:25.490000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe6b684780 a2=0 a3=7ffe6b68476c items=0 ppid=2808 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:25.520367 kernel: audit: type=1325 audit(1707435565.490:340): table=filter:144 family=2 entries=6 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:25.520476 kernel: audit: type=1300 audit(1707435565.490:340): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe6b684780 a2=0 a3=7ffe6b68476c items=0 ppid=2808 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:25.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:25.493000 audit[5515]: NETFILTER_CFG table=nat:145 family=2 entries=92 op=nft_register_chain pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:25.542947 kernel: audit: type=1327 audit(1707435565.490:340): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:25.543018 kernel: audit: type=1325 audit(1707435565.493:341): table=nat:145 family=2 entries=92 op=nft_register_chain pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 8 23:39:25.493000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe6b684780 a2=0 a3=7ffe6b68476c items=0 ppid=2808 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:39:25.493000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 8 23:39:26.203641 systemd[1]: 
run-containerd-runc-k8s.io-bdd391fc1d7faad6c3679ce331e0dfb2e9a82712ffae625e3980bc14ae27f290-runc.QkUB4r.mount: Deactivated successfully. Feb 8 23:39:35.536582 systemd[1]: run-containerd-runc-k8s.io-a1dc31ac7ad67788dcc47efdfc725038082aa67b9f2b203c8fd82332a70da0ac-runc.LEm0W8.mount: Deactivated successfully. Feb 8 23:39:51.194583 systemd[1]: run-containerd-runc-k8s.io-893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9-runc.oVdXEl.mount: Deactivated successfully. Feb 8 23:39:55.243937 systemd[1]: run-containerd-runc-k8s.io-bdd391fc1d7faad6c3679ce331e0dfb2e9a82712ffae625e3980bc14ae27f290-runc.noJgRt.mount: Deactivated successfully. Feb 8 23:39:59.471102 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 8 23:39:59.471264 kernel: audit: type=1130 audit(1707435599.449:342): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.4:22-10.200.12.6:34934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.4:22-10.200.12.6:34934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.450219 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.12.6:34934.service. 
Feb 8 23:40:00.062000 audit[5615]: USER_ACCT pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.080342 sshd[5615]: Accepted publickey for core from 10.200.12.6 port 34934 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:00.080864 kernel: audit: type=1101 audit(1707435600.062:343): pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.080000 audit[5615]: CRED_ACQ pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.081267 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:00.086424 systemd-logind[1392]: New session 10 of user core. Feb 8 23:40:00.089252 systemd[1]: Started session-10.scope. 
Feb 8 23:40:00.106740 kernel: audit: type=1103 audit(1707435600.080:344): pid=5615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.106828 kernel: audit: type=1006 audit(1707435600.080:345): pid=5615 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 8 23:40:00.080000 audit[5615]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff72a1d360 a2=3 a3=0 items=0 ppid=1 pid=5615 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:00.122302 kernel: audit: type=1300 audit(1707435600.080:345): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff72a1d360 a2=3 a3=0 items=0 ppid=1 pid=5615 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:00.080000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:00.122702 kernel: audit: type=1327 audit(1707435600.080:345): proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:00.092000 audit[5615]: USER_START pid=5615 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.128683 kernel: audit: type=1105 audit(1707435600.092:346): pid=5615 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.092000 audit[5620]: CRED_ACQ pid=5620 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.144729 kernel: audit: type=1103 audit(1707435600.092:347): pid=5620 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.613785 sshd[5615]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:00.615000 audit[5615]: USER_END pid=5615 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.619012 systemd[1]: sshd@7-10.200.8.4:22-10.200.12.6:34934.service: Deactivated successfully. Feb 8 23:40:00.621075 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:40:00.621925 systemd-logind[1392]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:40:00.623022 systemd-logind[1392]: Removed session 10. 
Feb 8 23:40:00.615000 audit[5615]: CRED_DISP pid=5615 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.648506 kernel: audit: type=1106 audit(1707435600.615:348): pid=5615 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.648622 kernel: audit: type=1104 audit(1707435600.615:349): pid=5615 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:00.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.4:22-10.200.12.6:34934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:05.718228 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.12.6:34942.service. Feb 8 23:40:05.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.4:22-10.200.12.6:34942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:05.725029 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:40:05.725093 kernel: audit: type=1130 audit(1707435605.718:351): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.4:22-10.200.12.6:34942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:06.331000 audit[5651]: USER_ACCT pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.332699 sshd[5651]: Accepted publickey for core from 10.200.12.6 port 34942 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:06.350693 kernel: audit: type=1101 audit(1707435606.331:352): pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.350000 audit[5651]: CRED_ACQ pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.351310 sshd[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:06.356791 systemd-logind[1392]: New session 11 of user core. Feb 8 23:40:06.357966 systemd[1]: Started session-11.scope. 
Feb 8 23:40:06.367983 kernel: audit: type=1103 audit(1707435606.350:353): pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.350000 audit[5651]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc98e3e7f0 a2=3 a3=0 items=0 ppid=1 pid=5651 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:06.379693 kernel: audit: type=1006 audit(1707435606.350:354): pid=5651 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 8 23:40:06.379740 kernel: audit: type=1300 audit(1707435606.350:354): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc98e3e7f0 a2=3 a3=0 items=0 ppid=1 pid=5651 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:06.397284 kernel: audit: type=1327 audit(1707435606.350:354): proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:06.350000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:06.361000 audit[5651]: USER_START pid=5651 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.418883 kernel: audit: type=1105 audit(1707435606.361:355): pid=5651 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.368000 audit[5653]: CRED_ACQ pid=5653 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.433123 kernel: audit: type=1103 audit(1707435606.368:356): pid=5653 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.831370 sshd[5651]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:06.832000 audit[5651]: USER_END pid=5651 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.835160 systemd[1]: sshd@8-10.200.8.4:22-10.200.12.6:34942.service: Deactivated successfully. Feb 8 23:40:06.836164 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:40:06.837796 systemd-logind[1392]: Session 11 logged out. Waiting for processes to exit. Feb 8 23:40:06.838656 systemd-logind[1392]: Removed session 11. 
Feb 8 23:40:06.832000 audit[5651]: CRED_DISP pid=5651 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.865291 kernel: audit: type=1106 audit(1707435606.832:357): pid=5651 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.865373 kernel: audit: type=1104 audit(1707435606.832:358): pid=5651 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:06.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.4:22-10.200.12.6:34942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:11.935747 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.12.6:45012.service. Feb 8 23:40:11.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.4:22-10.200.12.6:45012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:11.942110 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:40:11.942203 kernel: audit: type=1130 audit(1707435611.935:360): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.4:22-10.200.12.6:45012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:12.550000 audit[5677]: USER_ACCT pid=5677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:12.551550 sshd[5677]: Accepted publickey for core from 10.200.12.6 port 45012 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:12.568692 kernel: audit: type=1101 audit(1707435612.550:361): pid=5677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:12.569507 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:12.568000 audit[5677]: CRED_ACQ pid=5677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:12.579798 systemd[1]: Started session-12.scope. Feb 8 23:40:12.580910 systemd-logind[1392]: New session 12 of user core. 
Feb 8 23:40:12.586821 kernel: audit: type=1103 audit(1707435612.568:362): pid=5677 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:12.568000 audit[5677]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec73619b0 a2=3 a3=0 items=0 ppid=1 pid=5677 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:12.613039 kernel: audit: type=1006 audit(1707435612.568:363): pid=5677 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 8 23:40:12.613131 kernel: audit: type=1300 audit(1707435612.568:363): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec73619b0 a2=3 a3=0 items=0 ppid=1 pid=5677 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:12.568000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:12.618824 kernel: audit: type=1327 audit(1707435612.568:363): proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:12.618892 kernel: audit: type=1105 audit(1707435612.586:364): pid=5677 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:12.586000 audit[5677]: USER_START pid=5677 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:12.643736 kernel: audit: type=1103 audit(1707435612.592:365): pid=5680 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:12.592000 audit[5680]: CRED_ACQ pid=5680 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:13.049186 sshd[5677]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:13.050000 audit[5677]: USER_END pid=5677 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:13.052978 systemd[1]: sshd@9-10.200.8.4:22-10.200.12.6:45012.service: Deactivated successfully. Feb 8 23:40:13.054030 systemd[1]: session-12.scope: Deactivated successfully. Feb 8 23:40:13.061006 systemd-logind[1392]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:40:13.061925 systemd-logind[1392]: Removed session 12. 
Feb 8 23:40:13.050000 audit[5677]: CRED_DISP pid=5677 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:13.083908 kernel: audit: type=1106 audit(1707435613.050:366): pid=5677 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:13.084009 kernel: audit: type=1104 audit(1707435613.050:367): pid=5677 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:13.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.4:22-10.200.12.6:45012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:18.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.4:22-10.200.12.6:44510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:18.154454 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.12.6:44510.service. Feb 8 23:40:18.159404 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:40:18.159477 kernel: audit: type=1130 audit(1707435618.153:369): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.4:22-10.200.12.6:44510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:18.780000 audit[5695]: USER_ACCT pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:18.782232 sshd[5695]: Accepted publickey for core from 10.200.12.6 port 44510 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:18.800803 kernel: audit: type=1101 audit(1707435618.780:370): pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:18.798000 audit[5695]: CRED_ACQ pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:18.800949 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:18.806650 systemd[1]: Started session-13.scope. Feb 8 23:40:18.807414 systemd-logind[1392]: New session 13 of user core. 
Feb 8 23:40:18.818888 kernel: audit: type=1103 audit(1707435618.798:371): pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:18.798000 audit[5695]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccda58230 a2=3 a3=0 items=0 ppid=1 pid=5695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:18.844118 kernel: audit: type=1006 audit(1707435618.798:372): pid=5695 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Feb 8 23:40:18.844211 kernel: audit: type=1300 audit(1707435618.798:372): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccda58230 a2=3 a3=0 items=0 ppid=1 pid=5695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:18.798000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:18.844690 kernel: audit: type=1327 audit(1707435618.798:372): proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:18.849688 kernel: audit: type=1105 audit(1707435618.811:373): pid=5695 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:18.811000 audit[5695]: USER_START pid=5695 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:18.818000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:18.881712 kernel: audit: type=1103 audit(1707435618.818:374): pid=5698 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:19.275344 sshd[5695]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:19.275000 audit[5695]: USER_END pid=5695 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:19.279074 systemd[1]: sshd@10-10.200.8.4:22-10.200.12.6:44510.service: Deactivated successfully. Feb 8 23:40:19.280103 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:40:19.281790 systemd-logind[1392]: Session 13 logged out. Waiting for processes to exit. Feb 8 23:40:19.282866 systemd-logind[1392]: Removed session 13. 
Feb 8 23:40:19.275000 audit[5695]: CRED_DISP pid=5695 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:19.309517 kernel: audit: type=1106 audit(1707435619.275:375): pid=5695 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:19.309597 kernel: audit: type=1104 audit(1707435619.275:376): pid=5695 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:19.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.4:22-10.200.12.6:44510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:21.196384 systemd[1]: run-containerd-runc-k8s.io-893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9-runc.TKIFTy.mount: Deactivated successfully. Feb 8 23:40:21.242394 systemd[1]: run-containerd-runc-k8s.io-893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9-runc.Kqq131.mount: Deactivated successfully. Feb 8 23:40:24.384679 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.12.6:44516.service. Feb 8 23:40:24.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.4:22-10.200.12.6:44516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:24.389790 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:40:24.389874 kernel: audit: type=1130 audit(1707435624.383:378): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.4:22-10.200.12.6:44516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:25.021000 audit[5747]: USER_ACCT pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.023233 sshd[5747]: Accepted publickey for core from 10.200.12.6 port 44516 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:25.037000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.040082 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:25.045953 systemd[1]: Started session-14.scope. Feb 8 23:40:25.046515 systemd-logind[1392]: New session 14 of user core. 
Feb 8 23:40:25.055230 kernel: audit: type=1101 audit(1707435625.021:379): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.055308 kernel: audit: type=1103 audit(1707435625.037:380): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.066695 kernel: audit: type=1006 audit(1707435625.038:381): pid=5747 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 8 23:40:25.066807 kernel: audit: type=1300 audit(1707435625.038:381): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe215635e0 a2=3 a3=0 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:25.038000 audit[5747]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe215635e0 a2=3 a3=0 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:25.038000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:25.082773 kernel: audit: type=1327 audit(1707435625.038:381): proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:25.055000 audit[5747]: USER_START pid=5747 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh 
res=success' Feb 8 23:40:25.103681 kernel: audit: type=1105 audit(1707435625.055:382): pid=5747 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.103770 kernel: audit: type=1103 audit(1707435625.057:383): pid=5750 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.057000 audit[5750]: CRED_ACQ pid=5750 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.241630 systemd[1]: run-containerd-runc-k8s.io-bdd391fc1d7faad6c3679ce331e0dfb2e9a82712ffae625e3980bc14ae27f290-runc.l0Kali.mount: Deactivated successfully. Feb 8 23:40:25.523221 sshd[5747]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:25.522000 audit[5747]: USER_END pid=5747 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.530331 systemd[1]: sshd@11-10.200.8.4:22-10.200.12.6:44516.service: Deactivated successfully. Feb 8 23:40:25.531206 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:40:25.537414 systemd-logind[1392]: Session 14 logged out. Waiting for processes to exit. Feb 8 23:40:25.538408 systemd-logind[1392]: Removed session 14. 
Feb 8 23:40:25.527000 audit[5747]: CRED_DISP pid=5747 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.557336 kernel: audit: type=1106 audit(1707435625.522:384): pid=5747 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.557429 kernel: audit: type=1104 audit(1707435625.527:385): pid=5747 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:25.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.4:22-10.200.12.6:44516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:25.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.4:22-10.200.12.6:44522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:25.625373 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.12.6:44522.service. 
Feb 8 23:40:26.236000 audit[5801]: USER_ACCT pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:26.238214 sshd[5801]: Accepted publickey for core from 10.200.12.6 port 44522 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:26.237000 audit[5801]: CRED_ACQ pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:26.237000 audit[5801]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed6cf18a0 a2=3 a3=0 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:26.237000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:26.240254 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:26.244811 systemd-logind[1392]: New session 15 of user core. Feb 8 23:40:26.245427 systemd[1]: Started session-15.scope. 
Feb 8 23:40:26.249000 audit[5801]: USER_START pid=5801 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:26.252000 audit[5804]: CRED_ACQ pid=5804 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:27.689853 sshd[5801]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:27.690000 audit[5801]: USER_END pid=5801 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:27.690000 audit[5801]: CRED_DISP pid=5801 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:27.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.4:22-10.200.12.6:44522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:27.693439 systemd[1]: sshd@12-10.200.8.4:22-10.200.12.6:44522.service: Deactivated successfully. Feb 8 23:40:27.695283 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:40:27.695375 systemd-logind[1392]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:40:27.696724 systemd-logind[1392]: Removed session 15. 
Feb 8 23:40:27.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.4:22-10.200.12.6:45450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:27.794933 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.12.6:45450.service. Feb 8 23:40:28.409000 audit[5812]: USER_ACCT pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:28.410989 sshd[5812]: Accepted publickey for core from 10.200.12.6 port 45450 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:28.410000 audit[5812]: CRED_ACQ pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:28.410000 audit[5812]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe74813ad0 a2=3 a3=0 items=0 ppid=1 pid=5812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:28.410000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:28.412478 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:28.417355 systemd-logind[1392]: New session 16 of user core. Feb 8 23:40:28.418045 systemd[1]: Started session-16.scope. 
Feb 8 23:40:28.421000 audit[5812]: USER_START pid=5812 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:28.423000 audit[5815]: CRED_ACQ pid=5815 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:28.914955 sshd[5812]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:28.914000 audit[5812]: USER_END pid=5812 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:28.914000 audit[5812]: CRED_DISP pid=5812 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:28.918061 systemd[1]: sshd@13-10.200.8.4:22-10.200.12.6:45450.service: Deactivated successfully. Feb 8 23:40:28.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.4:22-10.200.12.6:45450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:28.920073 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:40:28.920749 systemd-logind[1392]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:40:28.922309 systemd-logind[1392]: Removed session 16. 
Feb 8 23:40:34.041075 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 8 23:40:34.044778 kernel: audit: type=1130 audit(1707435634.018:405): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.4:22-10.200.12.6:45454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:34.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.4:22-10.200.12.6:45454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:34.020031 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.12.6:45454.service. Feb 8 23:40:34.639000 audit[5828]: USER_ACCT pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:34.659183 sshd[5828]: Accepted publickey for core from 10.200.12.6 port 45454 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:34.659566 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:34.659888 kernel: audit: type=1101 audit(1707435634.639:406): pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:34.670688 systemd[1]: Started session-17.scope. Feb 8 23:40:34.671553 systemd-logind[1392]: New session 17 of user core. 
Feb 8 23:40:34.657000 audit[5828]: CRED_ACQ pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:34.703797 kernel: audit: type=1103 audit(1707435634.657:407): pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:34.703901 kernel: audit: type=1006 audit(1707435634.657:408): pid=5828 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 8 23:40:34.657000 audit[5828]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeacb62bd0 a2=3 a3=0 items=0 ppid=1 pid=5828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:34.719536 kernel: audit: type=1300 audit(1707435634.657:408): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeacb62bd0 a2=3 a3=0 items=0 ppid=1 pid=5828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:34.720270 kernel: audit: type=1327 audit(1707435634.657:408): proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:34.657000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:34.678000 audit[5828]: USER_START pid=5828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:34.741797 kernel: 
audit: type=1105 audit(1707435634.678:409): pid=5828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:34.680000 audit[5834]: CRED_ACQ pid=5834 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:34.742725 kernel: audit: type=1103 audit(1707435634.680:410): pid=5834 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:35.140066 sshd[5828]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:35.141000 audit[5828]: USER_END pid=5828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:35.144907 systemd[1]: sshd@14-10.200.8.4:22-10.200.12.6:45454.service: Deactivated successfully. Feb 8 23:40:35.145842 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:40:35.152226 systemd-logind[1392]: Session 17 logged out. Waiting for processes to exit. Feb 8 23:40:35.153234 systemd-logind[1392]: Removed session 17. 
Feb 8 23:40:35.141000 audit[5828]: CRED_DISP pid=5828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:35.175461 kernel: audit: type=1106 audit(1707435635.141:411): pid=5828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:35.175553 kernel: audit: type=1104 audit(1707435635.141:412): pid=5828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:35.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.4:22-10.200.12.6:45454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:35.536209 systemd[1]: run-containerd-runc-k8s.io-a1dc31ac7ad67788dcc47efdfc725038082aa67b9f2b203c8fd82332a70da0ac-runc.IcWZyA.mount: Deactivated successfully. Feb 8 23:40:40.243284 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.12.6:35832.service. Feb 8 23:40:40.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.4:22-10.200.12.6:35832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:40.247851 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:40:40.247950 kernel: audit: type=1130 audit(1707435640.242:414): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.4:22-10.200.12.6:35832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:40.854000 audit[5870]: USER_ACCT pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:40.856856 sshd[5870]: Accepted publickey for core from 10.200.12.6 port 35832 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:40.872000 audit[5870]: CRED_ACQ pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:40.874889 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:40.880638 systemd[1]: Started session-18.scope. Feb 8 23:40:40.881605 systemd-logind[1392]: New session 18 of user core. 
Feb 8 23:40:40.892692 kernel: audit: type=1101 audit(1707435640.854:415): pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:40.892768 kernel: audit: type=1103 audit(1707435640.872:416): pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:40.892797 kernel: audit: type=1006 audit(1707435640.872:417): pid=5870 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 8 23:40:40.872000 audit[5870]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc17194170 a2=3 a3=0 items=0 ppid=1 pid=5870 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:40.899685 kernel: audit: type=1300 audit(1707435640.872:417): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc17194170 a2=3 a3=0 items=0 ppid=1 pid=5870 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:40.913728 kernel: audit: type=1327 audit(1707435640.872:417): proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:40.872000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:40:40.885000 audit[5870]: USER_START pid=5870 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh 
res=success' Feb 8 23:40:40.919686 kernel: audit: type=1105 audit(1707435640.885:418): pid=5870 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:40.887000 audit[5874]: CRED_ACQ pid=5874 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:40.948383 kernel: audit: type=1103 audit(1707435640.887:419): pid=5874 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:41.352270 sshd[5870]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:41.352000 audit[5870]: USER_END pid=5870 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:41.361175 systemd[1]: sshd@15-10.200.8.4:22-10.200.12.6:35832.service: Deactivated successfully. Feb 8 23:40:41.362411 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:40:41.363684 systemd-logind[1392]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:40:41.364581 systemd-logind[1392]: Removed session 18. 
Feb 8 23:40:41.352000 audit[5870]: CRED_DISP pid=5870 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:41.385319 kernel: audit: type=1106 audit(1707435641.352:420): pid=5870 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:41.385436 kernel: audit: type=1104 audit(1707435641.352:421): pid=5870 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:41.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.4:22-10.200.12.6:35832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.455685 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.12.6:35848.service. Feb 8 23:40:46.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.4:22-10.200.12.6:35848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.461943 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:40:46.462020 kernel: audit: type=1130 audit(1707435646.454:423): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.4:22-10.200.12.6:35848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:47.090315 kernel: audit: type=1101 audit(1707435647.070:424): pid=5883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:47.070000 audit[5883]: USER_ACCT pid=5883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:47.089802 sshd[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:47.087000 audit[5883]: CRED_ACQ pid=5883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:40:47.091019 sshd[5883]: Accepted publickey for core from 10.200.12.6 port 35848 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:47.095536 systemd[1]: Started session-19.scope. Feb 8 23:40:47.096493 systemd-logind[1392]: New session 19 of user core. 
Feb 8 23:40:47.117608 kernel: audit: type=1103 audit(1707435647.087:425): pid=5883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:47.117712 kernel: audit: type=1006 audit(1707435647.087:426): pid=5883 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Feb 8 23:40:47.087000 audit[5883]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3b24d7a0 a2=3 a3=0 items=0 ppid=1 pid=5883 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:47.087000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:40:47.133686 kernel: audit: type=1300 audit(1707435647.087:426): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3b24d7a0 a2=3 a3=0 items=0 ppid=1 pid=5883 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:47.133732 kernel: audit: type=1327 audit(1707435647.087:426): proctitle=737368643A20636F7265205B707269765D
Feb 8 23:40:47.099000 audit[5883]: USER_START pid=5883 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:47.156219 kernel: audit: type=1105 audit(1707435647.099:427): pid=5883 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:47.156304 kernel: audit: type=1103 audit(1707435647.107:428): pid=5886 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:47.107000 audit[5886]: CRED_ACQ pid=5886 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:47.567742 sshd[5883]: pam_unix(sshd:session): session closed for user core
Feb 8 23:40:47.567000 audit[5883]: USER_END pid=5883 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:47.571305 systemd[1]: sshd@16-10.200.8.4:22-10.200.12.6:35848.service: Deactivated successfully.
Feb 8 23:40:47.572382 systemd[1]: session-19.scope: Deactivated successfully.
Feb 8 23:40:47.579650 systemd-logind[1392]: Session 19 logged out. Waiting for processes to exit.
Feb 8 23:40:47.580621 systemd-logind[1392]: Removed session 19.
Feb 8 23:40:47.568000 audit[5883]: CRED_DISP pid=5883 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:47.601182 kernel: audit: type=1106 audit(1707435647.567:429): pid=5883 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:47.601269 kernel: audit: type=1104 audit(1707435647.568:430): pid=5883 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:47.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.4:22-10.200.12.6:35848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:52.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.4:22-10.200.12.6:46654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:52.672124 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.12.6:46654.service.
Feb 8 23:40:52.677277 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 8 23:40:52.677347 kernel: audit: type=1130 audit(1707435652.671:432): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.4:22-10.200.12.6:46654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:53.288000 audit[5916]: USER_ACCT pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.289457 sshd[5916]: Accepted publickey for core from 10.200.12.6 port 46654 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:40:53.306000 audit[5916]: CRED_ACQ pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.306932 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:40:53.312095 systemd[1]: Started session-20.scope.
Feb 8 23:40:53.313191 systemd-logind[1392]: New session 20 of user core.
Feb 8 23:40:53.323680 kernel: audit: type=1101 audit(1707435653.288:433): pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.323758 kernel: audit: type=1103 audit(1707435653.306:434): pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.340990 kernel: audit: type=1006 audit(1707435653.306:435): pid=5916 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Feb 8 23:40:53.341059 kernel: audit: type=1300 audit(1707435653.306:435): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe92129e60 a2=3 a3=0 items=0 ppid=1 pid=5916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:53.306000 audit[5916]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe92129e60 a2=3 a3=0 items=0 ppid=1 pid=5916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:53.306000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:40:53.351723 kernel: audit: type=1327 audit(1707435653.306:435): proctitle=737368643A20636F7265205B707269765D
Feb 8 23:40:53.316000 audit[5916]: USER_START pid=5916 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.373771 kernel: audit: type=1105 audit(1707435653.316:436): pid=5916 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.316000 audit[5919]: CRED_ACQ pid=5919 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.387958 kernel: audit: type=1103 audit(1707435653.316:437): pid=5919 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.782521 sshd[5916]: pam_unix(sshd:session): session closed for user core
Feb 8 23:40:53.783000 audit[5916]: USER_END pid=5916 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.786265 systemd-logind[1392]: Session 20 logged out. Waiting for processes to exit.
Feb 8 23:40:53.787678 systemd[1]: sshd@17-10.200.8.4:22-10.200.12.6:46654.service: Deactivated successfully.
Feb 8 23:40:53.788535 systemd[1]: session-20.scope: Deactivated successfully.
Feb 8 23:40:53.790112 systemd-logind[1392]: Removed session 20.
Feb 8 23:40:53.783000 audit[5916]: CRED_DISP pid=5916 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.817416 kernel: audit: type=1106 audit(1707435653.783:438): pid=5916 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.817497 kernel: audit: type=1104 audit(1707435653.783:439): pid=5916 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:53.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.4:22-10.200.12.6:46654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:53.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.4:22-10.200.12.6:46660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:53.884576 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.12.6:46660.service.
Feb 8 23:40:54.514000 audit[5929]: USER_ACCT pid=5929 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:54.515363 sshd[5929]: Accepted publickey for core from 10.200.12.6 port 46660 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:40:54.515000 audit[5929]: CRED_ACQ pid=5929 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:54.515000 audit[5929]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7a79d280 a2=3 a3=0 items=0 ppid=1 pid=5929 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:54.515000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:40:54.516628 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:40:54.521754 systemd[1]: Started session-21.scope.
Feb 8 23:40:54.522479 systemd-logind[1392]: New session 21 of user core.
Feb 8 23:40:54.527000 audit[5929]: USER_START pid=5929 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:54.529000 audit[5932]: CRED_ACQ pid=5932 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:55.064815 sshd[5929]: pam_unix(sshd:session): session closed for user core
Feb 8 23:40:55.065000 audit[5929]: USER_END pid=5929 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:55.066000 audit[5929]: CRED_DISP pid=5929 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:55.068248 systemd[1]: sshd@18-10.200.8.4:22-10.200.12.6:46660.service: Deactivated successfully.
Feb 8 23:40:55.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.4:22-10.200.12.6:46660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:55.070129 systemd[1]: session-21.scope: Deactivated successfully.
Feb 8 23:40:55.070796 systemd-logind[1392]: Session 21 logged out. Waiting for processes to exit.
Feb 8 23:40:55.071768 systemd-logind[1392]: Removed session 21.
Feb 8 23:40:55.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.4:22-10.200.12.6:46666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:55.167170 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.12.6:46666.service.
Feb 8 23:40:55.217152 systemd[1]: run-containerd-runc-k8s.io-041cb74cd331a285874b342702f283f2c5d177c7fbbeb085e3efa355d4ea2706-runc.qDggmH.mount: Deactivated successfully.
Feb 8 23:40:55.245558 systemd[1]: run-containerd-runc-k8s.io-bdd391fc1d7faad6c3679ce331e0dfb2e9a82712ffae625e3980bc14ae27f290-runc.NLXZoe.mount: Deactivated successfully.
Feb 8 23:40:55.782000 audit[5940]: USER_ACCT pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:55.783530 sshd[5940]: Accepted publickey for core from 10.200.12.6 port 46666 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:40:55.783000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:55.784000 audit[5940]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc871f0f90 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:55.784000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:40:55.785154 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:40:55.790220 systemd-logind[1392]: New session 22 of user core.
Feb 8 23:40:55.790386 systemd[1]: Started session-22.scope.
Feb 8 23:40:55.801000 audit[5940]: USER_START pid=5940 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:55.802000 audit[5982]: CRED_ACQ pid=5982 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:57.266079 sshd[5940]: pam_unix(sshd:session): session closed for user core
Feb 8 23:40:57.266000 audit[5940]: USER_END pid=5940 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:57.266000 audit[5940]: CRED_DISP pid=5940 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:57.269336 systemd[1]: sshd@19-10.200.8.4:22-10.200.12.6:46666.service: Deactivated successfully.
Feb 8 23:40:57.270454 systemd[1]: session-22.scope: Deactivated successfully.
Feb 8 23:40:57.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.4:22-10.200.12.6:46666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:57.271861 systemd-logind[1392]: Session 22 logged out. Waiting for processes to exit.
Feb 8 23:40:57.273740 systemd-logind[1392]: Removed session 22.
Feb 8 23:40:57.338000 audit[6021]: NETFILTER_CFG table=filter:146 family=2 entries=18 op=nft_register_rule pid=6021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 8 23:40:57.338000 audit[6021]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffccb39ac50 a2=0 a3=7ffccb39ac3c items=0 ppid=2808 pid=6021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:57.338000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:40:57.340000 audit[6021]: NETFILTER_CFG table=nat:147 family=2 entries=94 op=nft_register_rule pid=6021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 8 23:40:57.340000 audit[6021]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffccb39ac50 a2=0 a3=7ffccb39ac3c items=0 ppid=2808 pid=6021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:57.340000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:40:57.367623 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.12.6:50138.service.
Feb 8 23:40:57.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.4:22-10.200.12.6:50138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:57.386000 audit[6049]: NETFILTER_CFG table=filter:148 family=2 entries=30 op=nft_register_rule pid=6049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 8 23:40:57.386000 audit[6049]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7fffb65e42e0 a2=0 a3=7fffb65e42cc items=0 ppid=2808 pid=6049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:57.386000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:40:57.388000 audit[6049]: NETFILTER_CFG table=nat:149 family=2 entries=94 op=nft_register_rule pid=6049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 8 23:40:57.388000 audit[6049]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7fffb65e42e0 a2=0 a3=7fffb65e42cc items=0 ppid=2808 pid=6049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:57.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:40:57.986561 kernel: kauditd_printk_skb: 36 callbacks suppressed
Feb 8 23:40:57.986734 kernel: audit: type=1101 audit(1707435657.980:464): pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:57.980000 audit[6042]: USER_ACCT pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:57.983295 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:40:57.987163 sshd[6042]: Accepted publickey for core from 10.200.12.6 port 50138 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:40:57.992466 systemd[1]: Started session-23.scope.
Feb 8 23:40:57.993594 systemd-logind[1392]: New session 23 of user core.
Feb 8 23:40:57.982000 audit[6042]: CRED_ACQ pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.020716 kernel: audit: type=1103 audit(1707435657.982:465): pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.020796 kernel: audit: type=1006 audit(1707435657.982:466): pid=6042 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Feb 8 23:40:57.982000 audit[6042]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffcf4829f0 a2=3 a3=0 items=0 ppid=1 pid=6042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:58.028775 kernel: audit: type=1300 audit(1707435657.982:466): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffcf4829f0 a2=3 a3=0 items=0 ppid=1 pid=6042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:58.043935 kernel: audit: type=1327 audit(1707435657.982:466): proctitle=737368643A20636F7265205B707269765D
Feb 8 23:40:57.982000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:40:57.997000 audit[6042]: USER_START pid=6042 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.049720 kernel: audit: type=1105 audit(1707435657.997:467): pid=6042 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.005000 audit[6051]: CRED_ACQ pid=6051 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.065682 kernel: audit: type=1103 audit(1707435658.005:468): pid=6051 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.638000 audit[6042]: USER_END pid=6042 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.642044 systemd-logind[1392]: Session 23 logged out. Waiting for processes to exit.
Feb 8 23:40:58.638302 sshd[6042]: pam_unix(sshd:session): session closed for user core
Feb 8 23:40:58.643375 systemd[1]: sshd@20-10.200.8.4:22-10.200.12.6:50138.service: Deactivated successfully.
Feb 8 23:40:58.644390 systemd[1]: session-23.scope: Deactivated successfully.
Feb 8 23:40:58.645490 systemd-logind[1392]: Removed session 23.
Feb 8 23:40:58.639000 audit[6042]: CRED_DISP pid=6042 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.673605 kernel: audit: type=1106 audit(1707435658.638:469): pid=6042 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.673732 kernel: audit: type=1104 audit(1707435658.639:470): pid=6042 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:58.673760 kernel: audit: type=1131 audit(1707435658.643:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.4:22-10.200.12.6:50138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:58.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.4:22-10.200.12.6:50138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:58.742104 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.12.6:50142.service.
Feb 8 23:40:58.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.4:22-10.200.12.6:50142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:59.363000 audit[6058]: USER_ACCT pid=6058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:59.364071 sshd[6058]: Accepted publickey for core from 10.200.12.6 port 50142 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:40:59.364000 audit[6058]: CRED_ACQ pid=6058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:59.364000 audit[6058]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1fd5c730 a2=3 a3=0 items=0 ppid=1 pid=6058 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:40:59.364000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:40:59.365726 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:40:59.370824 systemd-logind[1392]: New session 24 of user core.
Feb 8 23:40:59.371824 systemd[1]: Started session-24.scope.
Feb 8 23:40:59.376000 audit[6058]: USER_START pid=6058 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:59.378000 audit[6061]: CRED_ACQ pid=6061 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:59.880637 sshd[6058]: pam_unix(sshd:session): session closed for user core
Feb 8 23:40:59.881000 audit[6058]: USER_END pid=6058 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:59.881000 audit[6058]: CRED_DISP pid=6058 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:40:59.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.4:22-10.200.12.6:50142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:59.884106 systemd[1]: sshd@21-10.200.8.4:22-10.200.12.6:50142.service: Deactivated successfully.
Feb 8 23:40:59.885566 systemd[1]: session-24.scope: Deactivated successfully.
Feb 8 23:40:59.886260 systemd-logind[1392]: Session 24 logged out. Waiting for processes to exit.
Feb 8 23:40:59.888139 systemd-logind[1392]: Removed session 24.
Feb 8 23:41:03.763714 kernel: kauditd_printk_skb: 11 callbacks suppressed
Feb 8 23:41:03.763856 kernel: audit: type=1325 audit(1707435663.756:481): table=filter:150 family=2 entries=18 op=nft_register_rule pid=6099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 8 23:41:03.756000 audit[6099]: NETFILTER_CFG table=filter:150 family=2 entries=18 op=nft_register_rule pid=6099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 8 23:41:03.756000 audit[6099]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff285d9530 a2=0 a3=7fff285d951c items=0 ppid=2808 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:41:03.800689 kernel: audit: type=1300 audit(1707435663.756:481): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff285d9530 a2=0 a3=7fff285d951c items=0 ppid=2808 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:41:03.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:41:03.810694 kernel: audit: type=1327 audit(1707435663.756:481): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:41:03.778000 audit[6099]: NETFILTER_CFG table=nat:151 family=2 entries=178 op=nft_register_chain pid=6099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 8 23:41:03.778000 audit[6099]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7fff285d9530 a2=0 a3=7fff285d951c items=0 ppid=2808 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:41:03.838418 kernel: audit: type=1325 audit(1707435663.778:482): table=nat:151 family=2 entries=178 op=nft_register_chain pid=6099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 8 23:41:03.838526 kernel: audit: type=1300 audit(1707435663.778:482): arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7fff285d9530 a2=0 a3=7fff285d951c items=0 ppid=2808 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:41:03.838554 kernel: audit: type=1327 audit(1707435663.778:482): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:41:03.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 8 23:41:04.995807 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.12.6:50154.service.
Feb 8 23:41:04.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.4:22-10.200.12.6:50154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:41:05.013734 kernel: audit: type=1130 audit(1707435664.995:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.4:22-10.200.12.6:50154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:41:05.617000 audit[6101]: USER_ACCT pid=6101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:05.618532 sshd[6101]: Accepted publickey for core from 10.200.12.6 port 50154 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:41:05.636221 kernel: audit: type=1101 audit(1707435665.617:484): pid=6101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:05.636033 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:41:05.634000 audit[6101]: CRED_ACQ pid=6101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:05.641794 systemd[1]: Started session-25.scope.
Feb 8 23:41:05.642736 systemd-logind[1392]: New session 25 of user core.
Feb 8 23:41:05.653728 kernel: audit: type=1103 audit(1707435665.634:485): pid=6101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:05.634000 audit[6101]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0e041890 a2=3 a3=0 items=0 ppid=1 pid=6101 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:41:05.634000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:41:05.646000 audit[6101]: USER_START pid=6101 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:05.663789 kernel: audit: type=1006 audit(1707435665.634:486): pid=6101 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Feb 8 23:41:05.646000 audit[6123]: CRED_ACQ pid=6123 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:06.109619 sshd[6101]: pam_unix(sshd:session): session closed for user core
Feb 8 23:41:06.111000 audit[6101]: USER_END pid=6101 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:06.111000 audit[6101]: CRED_DISP pid=6101 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:06.113288 systemd[1]: sshd@22-10.200.8.4:22-10.200.12.6:50154.service: Deactivated successfully.
Feb 8 23:41:06.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.4:22-10.200.12.6:50154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:41:06.114618 systemd[1]: session-25.scope: Deactivated successfully.
Feb 8 23:41:06.115195 systemd-logind[1392]: Session 25 logged out. Waiting for processes to exit.
Feb 8 23:41:06.116197 systemd-logind[1392]: Removed session 25.
Feb 8 23:41:11.214763 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.12.6:46246.service.
Feb 8 23:41:11.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.4:22-10.200.12.6:46246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:41:11.220365 kernel: kauditd_printk_skb: 7 callbacks suppressed
Feb 8 23:41:11.220435 kernel: audit: type=1130 audit(1707435671.214:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.4:22-10.200.12.6:46246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 8 23:41:11.833000 audit[6134]: USER_ACCT pid=6134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:11.852642 sshd[6134]: Accepted publickey for core from 10.200.12.6 port 46246 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:41:11.853037 kernel: audit: type=1101 audit(1707435671.833:493): pid=6134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:11.852702 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:11.851000 audit[6134]: CRED_ACQ pid=6134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:11.858134 systemd[1]: Started session-26.scope. Feb 8 23:41:11.859160 systemd-logind[1392]: New session 26 of user core. 
Feb 8 23:41:11.870693 kernel: audit: type=1103 audit(1707435671.851:494): pid=6134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:11.851000 audit[6134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd4321a80 a2=3 a3=0 items=0 ppid=1 pid=6134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:11.880679 kernel: audit: type=1006 audit(1707435671.851:495): pid=6134 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Feb 8 23:41:11.880713 kernel: audit: type=1300 audit(1707435671.851:495): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd4321a80 a2=3 a3=0 items=0 ppid=1 pid=6134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:11.851000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:41:11.896738 kernel: audit: type=1327 audit(1707435671.851:495): proctitle=737368643A20636F7265205B707269765D Feb 8 23:41:11.858000 audit[6134]: USER_START pid=6134 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:11.917778 kernel: audit: type=1105 audit(1707435671.858:496): pid=6134 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:11.863000 audit[6137]: CRED_ACQ pid=6137 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:11.918686 kernel: audit: type=1103 audit(1707435671.863:497): pid=6137 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:12.324018 sshd[6134]: pam_unix(sshd:session): session closed for user core Feb 8 23:41:12.324000 audit[6134]: USER_END pid=6134 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:12.327982 systemd-logind[1392]: Session 26 logged out. Waiting for processes to exit. Feb 8 23:41:12.329743 systemd[1]: sshd@23-10.200.8.4:22-10.200.12.6:46246.service: Deactivated successfully. Feb 8 23:41:12.330759 systemd[1]: session-26.scope: Deactivated successfully. Feb 8 23:41:12.332724 systemd-logind[1392]: Removed session 26. 
Feb 8 23:41:12.344014 kernel: audit: type=1106 audit(1707435672.324:498): pid=6134 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:12.344093 kernel: audit: type=1104 audit(1707435672.325:499): pid=6134 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:12.325000 audit[6134]: CRED_DISP pid=6134 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:12.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.4:22-10.200.12.6:46246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:17.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.4:22-10.200.12.6:35438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:17.430349 systemd[1]: Started sshd@24-10.200.8.4:22-10.200.12.6:35438.service. Feb 8 23:41:17.435589 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:41:17.435686 kernel: audit: type=1130 audit(1707435677.430:501): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.4:22-10.200.12.6:35438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:41:18.054000 audit[6156]: USER_ACCT pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.055289 sshd[6156]: Accepted publickey for core from 10.200.12.6 port 35438 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:41:18.057138 sshd[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:18.056000 audit[6156]: CRED_ACQ pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.078229 systemd-logind[1392]: New session 27 of user core. Feb 8 23:41:18.079147 systemd[1]: Started session-27.scope. Feb 8 23:41:18.089337 kernel: audit: type=1101 audit(1707435678.054:502): pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.089450 kernel: audit: type=1103 audit(1707435678.056:503): pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.056000 audit[6156]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe446d89d0 a2=3 a3=0 items=0 ppid=1 pid=6156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:18.117644 kernel: audit: type=1006 audit(1707435678.056:504): 
pid=6156 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Feb 8 23:41:18.117749 kernel: audit: type=1300 audit(1707435678.056:504): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe446d89d0 a2=3 a3=0 items=0 ppid=1 pid=6156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:18.117775 kernel: audit: type=1327 audit(1707435678.056:504): proctitle=737368643A20636F7265205B707269765D Feb 8 23:41:18.056000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:41:18.084000 audit[6156]: USER_START pid=6156 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.140011 kernel: audit: type=1105 audit(1707435678.084:505): pid=6156 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.143323 kernel: audit: type=1103 audit(1707435678.090:506): pid=6159 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.090000 audit[6159]: CRED_ACQ pid=6159 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.548611 sshd[6156]: pam_unix(sshd:session): session closed for 
user core Feb 8 23:41:18.549000 audit[6156]: USER_END pid=6156 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.552485 systemd[1]: sshd@24-10.200.8.4:22-10.200.12.6:35438.service: Deactivated successfully. Feb 8 23:41:18.553505 systemd[1]: session-27.scope: Deactivated successfully. Feb 8 23:41:18.559591 systemd-logind[1392]: Session 27 logged out. Waiting for processes to exit. Feb 8 23:41:18.560612 systemd-logind[1392]: Removed session 27. Feb 8 23:41:18.550000 audit[6156]: CRED_DISP pid=6156 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.582498 kernel: audit: type=1106 audit(1707435678.549:507): pid=6156 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.582573 kernel: audit: type=1104 audit(1707435678.550:508): pid=6156 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:18.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.4:22-10.200.12.6:35438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:41:21.199847 systemd[1]: run-containerd-runc-k8s.io-893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9-runc.rGzAQW.mount: Deactivated successfully. Feb 8 23:41:23.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.4:22-10.200.12.6:35450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:23.653314 systemd[1]: Started sshd@25-10.200.8.4:22-10.200.12.6:35450.service. Feb 8 23:41:23.658458 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:41:23.658558 kernel: audit: type=1130 audit(1707435683.652:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.4:22-10.200.12.6:35450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:24.273000 audit[6210]: USER_ACCT pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.274108 sshd[6210]: Accepted publickey for core from 10.200.12.6 port 35450 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:41:24.290684 kernel: audit: type=1101 audit(1707435684.273:511): pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.290772 kernel: audit: type=1103 audit(1707435684.289:512): pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 
23:41:24.289000 audit[6210]: CRED_ACQ pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.291236 sshd[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:24.307089 systemd[1]: Started session-28.scope. Feb 8 23:41:24.308023 systemd-logind[1392]: New session 28 of user core. Feb 8 23:41:24.317383 kernel: audit: type=1006 audit(1707435684.290:513): pid=6210 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Feb 8 23:41:24.290000 audit[6210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe45af9190 a2=3 a3=0 items=0 ppid=1 pid=6210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:24.335134 kernel: audit: type=1300 audit(1707435684.290:513): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe45af9190 a2=3 a3=0 items=0 ppid=1 pid=6210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:24.290000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:41:24.313000 audit[6210]: USER_START pid=6210 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.356620 kernel: audit: type=1327 audit(1707435684.290:513): proctitle=737368643A20636F7265205B707269765D Feb 8 23:41:24.356709 kernel: audit: type=1105 audit(1707435684.313:514): pid=6210 uid=0 auid=500 ses=28 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.356757 kernel: audit: type=1103 audit(1707435684.318:515): pid=6213 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.318000 audit[6213]: CRED_ACQ pid=6213 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.772424 sshd[6210]: pam_unix(sshd:session): session closed for user core Feb 8 23:41:24.773000 audit[6210]: USER_END pid=6210 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.776233 systemd-logind[1392]: Session 28 logged out. Waiting for processes to exit. Feb 8 23:41:24.777732 systemd[1]: sshd@25-10.200.8.4:22-10.200.12.6:35450.service: Deactivated successfully. Feb 8 23:41:24.778784 systemd[1]: session-28.scope: Deactivated successfully. Feb 8 23:41:24.780278 systemd-logind[1392]: Removed session 28. 
Feb 8 23:41:24.773000 audit[6210]: CRED_DISP pid=6210 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.805364 kernel: audit: type=1106 audit(1707435684.773:516): pid=6210 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.805464 kernel: audit: type=1104 audit(1707435684.773:517): pid=6210 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:24.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.4:22-10.200.12.6:35450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:25.214466 systemd[1]: run-containerd-runc-k8s.io-041cb74cd331a285874b342702f283f2c5d177c7fbbeb085e3efa355d4ea2706-runc.OGRa3l.mount: Deactivated successfully. Feb 8 23:41:25.251892 systemd[1]: run-containerd-runc-k8s.io-bdd391fc1d7faad6c3679ce331e0dfb2e9a82712ffae625e3980bc14ae27f290-runc.9uzPjB.mount: Deactivated successfully. Feb 8 23:41:29.896536 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:41:29.896686 kernel: audit: type=1130 audit(1707435689.875:519): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.4:22-10.200.12.6:35974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:41:29.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.4:22-10.200.12.6:35974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:29.875402 systemd[1]: Started sshd@26-10.200.8.4:22-10.200.12.6:35974.service. Feb 8 23:41:30.496000 audit[6263]: USER_ACCT pid=6263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.497321 sshd[6263]: Accepted publickey for core from 10.200.12.6 port 35974 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:41:30.515687 kernel: audit: type=1101 audit(1707435690.496:520): pid=6263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.516213 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:30.514000 audit[6263]: CRED_ACQ pid=6263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.525358 systemd[1]: Started session-29.scope. Feb 8 23:41:30.526478 systemd-logind[1392]: New session 29 of user core. 
Feb 8 23:41:30.533693 kernel: audit: type=1103 audit(1707435690.514:521): pid=6263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.515000 audit[6263]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7566e960 a2=3 a3=0 items=0 ppid=1 pid=6263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:30.561058 kernel: audit: type=1006 audit(1707435690.515:522): pid=6263 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Feb 8 23:41:30.561136 kernel: audit: type=1300 audit(1707435690.515:522): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7566e960 a2=3 a3=0 items=0 ppid=1 pid=6263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:30.515000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 8 23:41:30.566543 kernel: audit: type=1327 audit(1707435690.515:522): proctitle=737368643A20636F7265205B707269765D Feb 8 23:41:30.526000 audit[6263]: USER_START pid=6263 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.533000 audit[6266]: CRED_ACQ pid=6266 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.598745 kernel: audit: type=1105 
audit(1707435690.526:523): pid=6263 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.598822 kernel: audit: type=1103 audit(1707435690.533:524): pid=6266 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.994136 sshd[6263]: pam_unix(sshd:session): session closed for user core Feb 8 23:41:30.994000 audit[6263]: USER_END pid=6263 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.997313 systemd[1]: sshd@26-10.200.8.4:22-10.200.12.6:35974.service: Deactivated successfully. Feb 8 23:41:30.998215 systemd[1]: session-29.scope: Deactivated successfully. Feb 8 23:41:30.999859 systemd-logind[1392]: Session 29 logged out. Waiting for processes to exit. Feb 8 23:41:31.000800 systemd-logind[1392]: Removed session 29. 
Feb 8 23:41:30.994000 audit[6263]: CRED_DISP pid=6263 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:31.026689 kernel: audit: type=1106 audit(1707435690.994:525): pid=6263 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:31.026764 kernel: audit: type=1104 audit(1707435690.994:526): pid=6263 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:30.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.4:22-10.200.12.6:35974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:35.534964 systemd[1]: run-containerd-runc-k8s.io-a1dc31ac7ad67788dcc47efdfc725038082aa67b9f2b203c8fd82332a70da0ac-runc.3IadzK.mount: Deactivated successfully. Feb 8 23:41:36.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.4:22-10.200.12.6:35986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:36.102179 systemd[1]: Started sshd@27-10.200.8.4:22-10.200.12.6:35986.service. 
Feb 8 23:41:36.107322 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 8 23:41:36.107400 kernel: audit: type=1130 audit(1707435696.101:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.4:22-10.200.12.6:35986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:36.719000 audit[6301]: USER_ACCT pid=6301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:36.737988 sshd[6301]: Accepted publickey for core from 10.200.12.6 port 35986 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:41:36.738309 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:36.738718 kernel: audit: type=1101 audit(1707435696.719:529): pid=6301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:36.737000 audit[6301]: CRED_ACQ pid=6301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 8 23:41:36.743449 systemd[1]: Started session-30.scope. Feb 8 23:41:36.744459 systemd-logind[1392]: New session 30 of user core. 
Feb 8 23:41:36.759678 kernel: audit: type=1103 audit(1707435696.737:530): pid=6301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:36.759755 kernel: audit: type=1006 audit(1707435696.737:531): pid=6301 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Feb 8 23:41:36.737000 audit[6301]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3fd1cbc0 a2=3 a3=0 items=0 ppid=1 pid=6301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:41:36.766715 kernel: audit: type=1300 audit(1707435696.737:531): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3fd1cbc0 a2=3 a3=0 items=0 ppid=1 pid=6301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:41:36.737000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 8 23:41:36.786505 kernel: audit: type=1327 audit(1707435696.737:531): proctitle=737368643A20636F7265205B707269765D
Feb 8 23:41:36.748000 audit[6301]: USER_START pid=6301 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:36.755000 audit[6304]: CRED_ACQ pid=6304 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:36.817220 kernel: audit: type=1105 audit(1707435696.748:532): pid=6301 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:36.817319 kernel: audit: type=1103 audit(1707435696.755:533): pid=6304 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:37.208903 sshd[6301]: pam_unix(sshd:session): session closed for user core
Feb 8 23:41:37.209000 audit[6301]: USER_END pid=6301 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:37.216550 systemd[1]: sshd@27-10.200.8.4:22-10.200.12.6:35986.service: Deactivated successfully.
Feb 8 23:41:37.217407 systemd[1]: session-30.scope: Deactivated successfully.
Feb 8 23:41:37.219035 systemd-logind[1392]: Session 30 logged out. Waiting for processes to exit.
Feb 8 23:41:37.219999 systemd-logind[1392]: Removed session 30.
Feb 8 23:41:37.227690 kernel: audit: type=1106 audit(1707435697.209:534): pid=6301 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:37.227787 kernel: audit: type=1104 audit(1707435697.209:535): pid=6301 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:37.209000 audit[6301]: CRED_DISP pid=6301 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 8 23:41:37.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.4:22-10.200.12.6:35986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:41:50.365477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e3a7cf7ab276d40845cfc53aa216d890c4e98986a5e47a1a2372cf541e46fbd-rootfs.mount: Deactivated successfully.
Feb 8 23:41:50.367421 env[1405]: time="2024-02-08T23:41:50.367248084Z" level=info msg="shim disconnected" id=5e3a7cf7ab276d40845cfc53aa216d890c4e98986a5e47a1a2372cf541e46fbd
Feb 8 23:41:50.367873 env[1405]: time="2024-02-08T23:41:50.367417486Z" level=warning msg="cleaning up after shim disconnected" id=5e3a7cf7ab276d40845cfc53aa216d890c4e98986a5e47a1a2372cf541e46fbd namespace=k8s.io
Feb 8 23:41:50.367873 env[1405]: time="2024-02-08T23:41:50.367457386Z" level=info msg="cleaning up dead shim"
Feb 8 23:41:50.375571 env[1405]: time="2024-02-08T23:41:50.375537540Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:41:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6340 runtime=io.containerd.runc.v2\n"
Feb 8 23:41:51.192734 kubelet[2651]: E0208 23:41:51.192477 2651 controller.go:189] failed to update lease, error: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-3510.3.2-a-9933156126)
Feb 8 23:41:51.200465 systemd[1]: run-containerd-runc-k8s.io-893c70826461f65a9b909e05aa1fcd546fa3cf6e59d661751c1c1c04a2756ce9-runc.7U8vCi.mount: Deactivated successfully.
Feb 8 23:41:51.328837 kubelet[2651]: I0208 23:41:51.328806 2651 scope.go:115] "RemoveContainer" containerID="5e3a7cf7ab276d40845cfc53aa216d890c4e98986a5e47a1a2372cf541e46fbd"
Feb 8 23:41:51.330549 env[1405]: time="2024-02-08T23:41:51.330511793Z" level=info msg="CreateContainer within sandbox \"743714ce218efbb121a41dd63d82baf55de3f1ba743e9ea827bab2544785a2fe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Feb 8 23:41:51.365026 env[1405]: time="2024-02-08T23:41:51.364771721Z" level=info msg="CreateContainer within sandbox \"743714ce218efbb121a41dd63d82baf55de3f1ba743e9ea827bab2544785a2fe\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0a55a0ce8cedaac21eb37ea1e76d3a12538b6e1091a28b218c604f1656003589\""
Feb 8 23:41:51.365391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2613313536.mount: Deactivated successfully.
Feb 8 23:41:51.365969 env[1405]: time="2024-02-08T23:41:51.365892228Z" level=info msg="StartContainer for \"0a55a0ce8cedaac21eb37ea1e76d3a12538b6e1091a28b218c604f1656003589\""
Feb 8 23:41:51.427406 env[1405]: time="2024-02-08T23:41:51.427362237Z" level=info msg="StartContainer for \"0a55a0ce8cedaac21eb37ea1e76d3a12538b6e1091a28b218c604f1656003589\" returns successfully"
Feb 8 23:41:52.278768 env[1405]: time="2024-02-08T23:41:52.270972834Z" level=info msg="shim disconnected" id=6e097d50b94debd823370d0d5acd6340ede6623194246d2ceeb2fa6595e8306c
Feb 8 23:41:52.278768 env[1405]: time="2024-02-08T23:41:52.271024935Z" level=warning msg="cleaning up after shim disconnected" id=6e097d50b94debd823370d0d5acd6340ede6623194246d2ceeb2fa6595e8306c namespace=k8s.io
Feb 8 23:41:52.278768 env[1405]: time="2024-02-08T23:41:52.271036935Z" level=info msg="cleaning up dead shim"
Feb 8 23:41:52.279687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e097d50b94debd823370d0d5acd6340ede6623194246d2ceeb2fa6595e8306c-rootfs.mount: Deactivated successfully.
Feb 8 23:41:52.281983 env[1405]: time="2024-02-08T23:41:52.281942907Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:41:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6424 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:41:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Feb 8 23:41:52.334825 kubelet[2651]: I0208 23:41:52.334796 2651 scope.go:115] "RemoveContainer" containerID="6e097d50b94debd823370d0d5acd6340ede6623194246d2ceeb2fa6595e8306c"
Feb 8 23:41:52.336936 env[1405]: time="2024-02-08T23:41:52.336901671Z" level=info msg="CreateContainer within sandbox \"d65f4075920fa1b3fd0e25fe9f7f255a2c8bf93baa5674d0f4300790f5c77052\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 8 23:41:52.365130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140515535.mount: Deactivated successfully.
Feb 8 23:41:52.373212 env[1405]: time="2024-02-08T23:41:52.373175011Z" level=info msg="CreateContainer within sandbox \"d65f4075920fa1b3fd0e25fe9f7f255a2c8bf93baa5674d0f4300790f5c77052\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fa722089ee7a90c7dde726896f250bb93002a4f443c57dbee3ecd1ed789c2515\""
Feb 8 23:41:52.373574 env[1405]: time="2024-02-08T23:41:52.373548114Z" level=info msg="StartContainer for \"fa722089ee7a90c7dde726896f250bb93002a4f443c57dbee3ecd1ed789c2515\""
Feb 8 23:41:52.454692 env[1405]: time="2024-02-08T23:41:52.451275129Z" level=info msg="StartContainer for \"fa722089ee7a90c7dde726896f250bb93002a4f443c57dbee3ecd1ed789c2515\" returns successfully"
Feb 8 23:41:55.889772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f26bead6eddc59cb3a687051d8045b8fea4f8905a161723ae31ef08e15933976-rootfs.mount: Deactivated successfully.
Feb 8 23:41:55.891251 env[1405]: time="2024-02-08T23:41:55.891209213Z" level=info msg="shim disconnected" id=f26bead6eddc59cb3a687051d8045b8fea4f8905a161723ae31ef08e15933976
Feb 8 23:41:55.891651 env[1405]: time="2024-02-08T23:41:55.891254714Z" level=warning msg="cleaning up after shim disconnected" id=f26bead6eddc59cb3a687051d8045b8fea4f8905a161723ae31ef08e15933976 namespace=k8s.io
Feb 8 23:41:55.891651 env[1405]: time="2024-02-08T23:41:55.891266614Z" level=info msg="cleaning up dead shim"
Feb 8 23:41:55.898574 env[1405]: time="2024-02-08T23:41:55.898547662Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:41:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6525 runtime=io.containerd.runc.v2\n"
Feb 8 23:41:56.346922 kubelet[2651]: I0208 23:41:56.346885 2651 scope.go:115] "RemoveContainer" containerID="f26bead6eddc59cb3a687051d8045b8fea4f8905a161723ae31ef08e15933976"
Feb 8 23:41:56.349390 env[1405]: time="2024-02-08T23:41:56.349347418Z" level=info msg="CreateContainer within sandbox \"25c58fb7f23c29974576e73b34a2c8a9c043a963bc2a4a949e4915cccdba610b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 8 23:41:56.385352 env[1405]: time="2024-02-08T23:41:56.385307654Z" level=info msg="CreateContainer within sandbox \"25c58fb7f23c29974576e73b34a2c8a9c043a963bc2a4a949e4915cccdba610b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"215cfb55fef596cb54f30975b3019a9a92155bea0f1ce2189eab8ecb93660a9f\""
Feb 8 23:41:56.385824 env[1405]: time="2024-02-08T23:41:56.385792257Z" level=info msg="StartContainer for \"215cfb55fef596cb54f30975b3019a9a92155bea0f1ce2189eab8ecb93660a9f\""
Feb 8 23:41:56.461410 env[1405]: time="2024-02-08T23:41:56.461370252Z" level=info msg="StartContainer for \"215cfb55fef596cb54f30975b3019a9a92155bea0f1ce2189eab8ecb93660a9f\" returns successfully"
Feb 8 23:41:57.759630 kubelet[2651]: E0208 23:41:57.759349 2651 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.4:37480->10.200.8.23:2379: read: connection timed out
Feb 8 23:42:01.341936 kubelet[2651]: E0208 23:42:01.341814 2651 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-9933156126.17b207b99d4ed085", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-9933156126", UID:"bcfa72e7cf8f4ed43bfe2ef57b11e5f6", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-9933156126"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 42, 516371589, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 42, 516371589, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.4:37294->10.200.8.23:2379: read: connection timed out' (will not retry!)
Feb 8 23:42:07.762155 kubelet[2651]: E0208 23:42:07.759569 2651 controller.go:189] failed to update lease, error: Put "https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-9933156126?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)