Dec 13 14:28:18.014959 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:28:18.014981 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:28:18.014993 kernel: BIOS-provided physical RAM map:
Dec 13 14:28:18.014998 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 14:28:18.015004 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 14:28:18.015011 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 14:28:18.015021 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Dec 13 14:28:18.015027 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 14:28:18.015035 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 14:28:18.015042 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 14:28:18.015048 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 14:28:18.015056 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 14:28:18.015062 kernel: NX (Execute Disable) protection: active
Dec 13 14:28:18.015068 kernel: efi: EFI v2.70 by Microsoft
Dec 13 14:28:18.015079 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Dec 13 14:28:18.015088 kernel: random: crng init done
Dec 13 14:28:18.015095 kernel: SMBIOS 3.1.0 present.
Dec 13 14:28:18.015103 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 14:28:18.015112 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 14:28:18.015119 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 14:28:18.015128 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Dec 13 14:28:18.015134 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 14:28:18.015145 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 14:28:18.015154 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 14:28:18.015160 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 14:28:18.015167 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 14:28:18.015176 kernel: tsc: Detected 2593.907 MHz processor
Dec 13 14:28:18.015184 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:28:18.015190 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:28:18.015199 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 14:28:18.015206 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:28:18.015212 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 14:28:18.015224 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 14:28:18.015231 kernel: Using GB pages for direct mapping
Dec 13 14:28:18.015240 kernel: Secure boot disabled
Dec 13 14:28:18.015248 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:28:18.015258 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 14:28:18.015266 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:28:18.015276 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:28:18.015286 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 14:28:18.015304 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 14:28:18.015311 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:28:18.015321 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:28:18.015329 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:28:18.015338 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:28:18.015346 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:28:18.015354 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:28:18.015361 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:28:18.015368 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 14:28:18.015374 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 14:28:18.015381 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 14:28:18.015388 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 14:28:18.015394 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 14:28:18.015401 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 14:28:18.015410 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 13 14:28:18.015420 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 14:28:18.015427 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 14:28:18.015433 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 14:28:18.015443 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:28:18.015451 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:28:18.015461 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 14:28:18.015469 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 14:28:18.015478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 14:28:18.015490 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 14:28:18.015498 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 14:28:18.015505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 14:28:18.015514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 14:28:18.015523 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 14:28:18.015531 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 14:28:18.015541 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 14:28:18.015550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 14:28:18.015560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 14:28:18.015573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 14:28:18.015583 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 14:28:18.015595 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 14:28:18.015607 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 14:28:18.015619 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 14:28:18.015629 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 14:28:18.026262 kernel: Zone ranges:
Dec 13 14:28:18.026280 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:28:18.026291 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 14:28:18.026307 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 14:28:18.026318 kernel: Movable zone start for each node
Dec 13 14:28:18.026329 kernel: Early memory node ranges
Dec 13 14:28:18.026341 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 14:28:18.026353 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 14:28:18.026366 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 14:28:18.026376 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 14:28:18.026389 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 14:28:18.026400 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:28:18.026414 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 14:28:18.026426 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 14:28:18.026438 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 14:28:18.026450 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 14:28:18.026460 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:28:18.026472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:28:18.026483 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:28:18.026494 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 14:28:18.026504 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:28:18.026516 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 14:28:18.026525 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 14:28:18.026535 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:28:18.026543 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:28:18.026553 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:28:18.026560 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:28:18.026568 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:28:18.026576 kernel: Hyper-V: PV spinlocks enabled
Dec 13 14:28:18.026585 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:28:18.026596 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Dec 13 14:28:18.026604 kernel: Policy zone: Normal
Dec 13 14:28:18.026614 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:28:18.026624 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:28:18.026631 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 14:28:18.026669 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:28:18.026676 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:28:18.026686 kernel: Memory: 8071664K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 315536K reserved, 0K cma-reserved)
Dec 13 14:28:18.026699 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:28:18.026707 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:28:18.026724 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:28:18.026736 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:28:18.026745 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:28:18.026753 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:28:18.026763 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:28:18.026772 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:28:18.026781 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:28:18.026788 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:28:18.026798 kernel: Using NULL legacy PIC
Dec 13 14:28:18.026811 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 14:28:18.026819 kernel: Console: colour dummy device 80x25
Dec 13 14:28:18.026827 kernel: printk: console [tty1] enabled
Dec 13 14:28:18.026836 kernel: printk: console [ttyS0] enabled
Dec 13 14:28:18.026846 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 14:28:18.026856 kernel: ACPI: Core revision 20210730
Dec 13 14:28:18.026865 kernel: Failed to register legacy timer interrupt
Dec 13 14:28:18.026874 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:28:18.026884 kernel: Hyper-V: Using IPI hypercalls
Dec 13 14:28:18.026891 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Dec 13 14:28:18.026900 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:28:18.026909 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:28:18.026919 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:28:18.026927 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:28:18.026935 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:28:18.026946 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:28:18.026957 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:28:18.026965 kernel: RETBleed: Vulnerable
Dec 13 14:28:18.026973 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:28:18.026982 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:28:18.026991 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:28:18.026999 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:28:18.027006 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:28:18.027017 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:28:18.027026 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:28:18.027037 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:28:18.027044 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:28:18.027054 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:28:18.027063 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:28:18.027071 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 14:28:18.027079 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 14:28:18.027089 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 14:28:18.027097 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 14:28:18.027107 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:28:18.027114 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:28:18.027124 kernel: LSM: Security Framework initializing
Dec 13 14:28:18.027131 kernel: SELinux: Initializing.
Dec 13 14:28:18.027143 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:28:18.027150 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:28:18.027161 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:28:18.027169 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:28:18.027179 kernel: signal: max sigframe size: 3632
Dec 13 14:28:18.027186 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:28:18.027197 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:28:18.027204 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:28:18.027215 kernel: x86: Booting SMP configuration:
Dec 13 14:28:18.027222 kernel: .... node #0, CPUs: #1
Dec 13 14:28:18.027233 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 14:28:18.027242 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:28:18.027253 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:28:18.027261 kernel: smpboot: Max logical packages: 1
Dec 13 14:28:18.027269 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Dec 13 14:28:18.027279 kernel: devtmpfs: initialized
Dec 13 14:28:18.027288 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:28:18.027296 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 14:28:18.027308 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:28:18.027316 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:28:18.027326 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:28:18.027334 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:28:18.027342 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:28:18.027351 kernel: audit: type=2000 audit(1734100097.023:1): state=initialized audit_enabled=0 res=1
Dec 13 14:28:18.027361 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:28:18.027369 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:28:18.027377 kernel: cpuidle: using governor menu
Dec 13 14:28:18.027389 kernel: ACPI: bus type PCI registered
Dec 13 14:28:18.027399 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:28:18.027407 kernel: dca service started, version 1.12.1
Dec 13 14:28:18.027414 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:28:18.027424 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:28:18.027434 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:28:18.027442 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:28:18.027449 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:28:18.027460 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:28:18.027471 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:28:18.027480 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:28:18.027487 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:28:18.027497 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:28:18.027506 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:28:18.027515 kernel: ACPI: Interpreter enabled
Dec 13 14:28:18.027522 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:28:18.027531 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:28:18.027540 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:28:18.027552 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 14:28:18.027560 kernel: iommu: Default domain type: Translated
Dec 13 14:28:18.027568 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:28:18.027577 kernel: vgaarb: loaded
Dec 13 14:28:18.027588 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:28:18.027596 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 14:28:18.027604 kernel: PTP clock support registered
Dec 13 14:28:18.027614 kernel: Registered efivars operations
Dec 13 14:28:18.027623 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:28:18.027631 kernel: PCI: System does not support PCI
Dec 13 14:28:18.027649 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 14:28:18.027659 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:28:18.027668 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:28:18.027675 kernel: pnp: PnP ACPI init
Dec 13 14:28:18.027685 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 14:28:18.027695 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:28:18.027703 kernel: NET: Registered PF_INET protocol family
Dec 13 14:28:18.027710 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:28:18.027722 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 14:28:18.027731 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:28:18.027740 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:28:18.027747 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 14:28:18.027756 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 14:28:18.027765 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:28:18.027776 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:28:18.027783 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:28:18.027791 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:28:18.027802 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:28:18.027813 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 14:28:18.027820 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Dec 13 14:28:18.027829 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:28:18.027839 kernel: Initialise system trusted keyrings
Dec 13 14:28:18.027849 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 14:28:18.027857 kernel: Key type asymmetric registered
Dec 13 14:28:18.027865 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:28:18.027874 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:28:18.027886 kernel: io scheduler mq-deadline registered
Dec 13 14:28:18.027893 kernel: io scheduler kyber registered
Dec 13 14:28:18.027902 kernel: io scheduler bfq registered
Dec 13 14:28:18.027911 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:28:18.027921 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:28:18.027929 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:28:18.027937 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 14:28:18.027946 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 14:28:18.028071 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 14:28:18.028149 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T14:28:17 UTC (1734100097)
Dec 13 14:28:18.028231 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 14:28:18.028243 kernel: fail to initialize ptp_kvm
Dec 13 14:28:18.028252 kernel: intel_pstate: CPU model not supported
Dec 13 14:28:18.028262 kernel: efifb: probing for efifb
Dec 13 14:28:18.028270 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 14:28:18.028280 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 14:28:18.028287 kernel: efifb: scrolling: redraw
Dec 13 14:28:18.028300 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 14:28:18.028307 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 14:28:18.028318 kernel: fb0: EFI VGA frame buffer device
Dec 13 14:28:18.028325 kernel: pstore: Registered efi as persistent store backend
Dec 13 14:28:18.028335 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:28:18.028343 kernel: Segment Routing with IPv6
Dec 13 14:28:18.028352 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:28:18.028360 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:28:18.028371 kernel: Key type dns_resolver registered
Dec 13 14:28:18.028380 kernel: IPI shorthand broadcast: enabled
Dec 13 14:28:18.028389 kernel: sched_clock: Marking stable (842605200, 24525700)->(1049553100, -182422200)
Dec 13 14:28:18.028397 kernel: registered taskstats version 1
Dec 13 14:28:18.028408 kernel: Loading compiled-in X.509 certificates
Dec 13 14:28:18.028415 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:28:18.028424 kernel: Key type .fscrypt registered
Dec 13 14:28:18.028432 kernel: Key type fscrypt-provisioning registered
Dec 13 14:28:18.028442 kernel: pstore: Using crash dump compression: deflate
Dec 13 14:28:18.028452 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:28:18.028461 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:28:18.028469 kernel: ima: No architecture policies found
Dec 13 14:28:18.028480 kernel: clk: Disabling unused clocks
Dec 13 14:28:18.028487 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:28:18.028496 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:28:18.028505 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:28:18.028516 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:28:18.028523 kernel: Run /init as init process
Dec 13 14:28:18.028530 kernel: with arguments:
Dec 13 14:28:18.028539 kernel: /init
Dec 13 14:28:18.028549 kernel: with environment:
Dec 13 14:28:18.028557 kernel: HOME=/
Dec 13 14:28:18.028567 kernel: TERM=linux
Dec 13 14:28:18.028574 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:28:18.028585 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:28:18.028595 systemd[1]: Detected virtualization microsoft.
Dec 13 14:28:18.028608 systemd[1]: Detected architecture x86-64.
Dec 13 14:28:18.028615 systemd[1]: Running in initrd.
Dec 13 14:28:18.028626 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:28:18.028642 systemd[1]: Hostname set to <localhost>.
Dec 13 14:28:18.028651 systemd[1]: Initializing machine ID from random generator.
Dec 13 14:28:18.028662 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:28:18.028671 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:28:18.028680 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:28:18.028688 systemd[1]: Reached target paths.target.
Dec 13 14:28:18.028700 systemd[1]: Reached target slices.target.
Dec 13 14:28:18.028710 systemd[1]: Reached target swap.target.
Dec 13 14:28:18.028719 systemd[1]: Reached target timers.target.
Dec 13 14:28:18.028726 systemd[1]: Listening on iscsid.socket.
Dec 13 14:28:18.028737 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:28:18.028746 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:28:18.028756 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:28:18.028766 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:28:18.028777 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:28:18.028786 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:28:18.028795 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:28:18.028803 systemd[1]: Reached target sockets.target.
Dec 13 14:28:18.028813 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:28:18.028821 systemd[1]: Finished network-cleanup.service.
Dec 13 14:28:18.028832 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:28:18.028840 systemd[1]: Starting systemd-journald.service...
Dec 13 14:28:18.028852 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:28:18.028861 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:28:18.028872 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:28:18.028879 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:28:18.028889 kernel: audit: type=1130 audit(1734100098.014:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.028898 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:28:18.028915 systemd-journald[183]: Journal started
Dec 13 14:28:18.028961 systemd-journald[183]: Runtime Journal (/run/log/journal/14bc88457a7b4881904733bc28f00a51) is 8.0M, max 159.0M, 151.0M free.
Dec 13 14:28:18.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.036660 systemd[1]: Started systemd-journald.service.
Dec 13 14:28:18.036691 kernel: audit: type=1130 audit(1734100098.031:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.048740 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:28:18.051832 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:28:18.054959 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 14:28:18.061301 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:28:18.074543 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:28:18.082844 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:28:18.103732 kernel: audit: type=1130 audit(1734100098.047:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.097189 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:28:18.108044 systemd-resolved[185]: Positive Trust Anchors:
Dec 13 14:28:18.108051 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:28:18.108082 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:28:18.141478 kernel: audit: type=1130 audit(1734100098.050:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.141558 dracut-cmdline[200]: dracut-dracut-053
Dec 13 14:28:18.141558 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:28:18.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.113423 systemd-resolved[185]: Defaulting to hostname 'linux'.
Dec 13 14:28:18.189080 kernel: audit: type=1130 audit(1734100098.076:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.189122 kernel: audit: type=1130 audit(1734100098.096:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.169877 systemd[1]: Started systemd-resolved.service.
Dec 13 14:28:18.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.205938 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:28:18.209947 kernel: audit: type=1130 audit(1734100098.193:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.220653 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:28:18.230410 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 14:28:18.232615 kernel: Bridge firewalling registered
Dec 13 14:28:18.260655 kernel: SCSI subsystem initialized
Dec 13 14:28:18.286440 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:28:18.286519 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:28:18.291896 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:28:18.291930 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:28:18.298171 systemd-modules-load[184]: Inserted module 'dm_multipath'
Dec 13 14:28:18.298973 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:28:18.306147 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:28:18.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.322216 kernel: audit: type=1130 audit(1734100098.302:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.333277 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:28:18.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.349657 kernel: audit: type=1130 audit(1734100098.334:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.353695 kernel: iscsi: registered transport (tcp)
Dec 13 14:28:18.380693 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:28:18.380744 kernel: QLogic iSCSI HBA Driver
Dec 13 14:28:18.409317 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:28:18.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.412862 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:28:18.464661 kernel: raid6: avx512x4 gen() 18421 MB/s
Dec 13 14:28:18.484654 kernel: raid6: avx512x4 xor() 8371 MB/s
Dec 13 14:28:18.504650 kernel: raid6: avx512x2 gen() 18235 MB/s
Dec 13 14:28:18.525656 kernel: raid6: avx512x2 xor() 30012 MB/s
Dec 13 14:28:18.545650 kernel: raid6: avx512x1 gen() 18370 MB/s
Dec 13 14:28:18.565650 kernel: raid6: avx512x1 xor() 26567 MB/s
Dec 13 14:28:18.585650 kernel: raid6: avx2x4 gen() 18306 MB/s
Dec 13 14:28:18.605650 kernel: raid6: avx2x4 xor() 7902 MB/s
Dec 13 14:28:18.625646 kernel: raid6: avx2x2 gen() 18307 MB/s
Dec 13 14:28:18.646654 kernel: raid6: avx2x2 xor() 22319 MB/s
Dec 13 14:28:18.666648 kernel: raid6: avx2x1 gen() 13829 MB/s
Dec 13 14:28:18.686650 kernel: raid6: avx2x1 xor() 18850 MB/s
Dec 13 14:28:18.706651 kernel: raid6: sse2x4 gen() 11727 MB/s
Dec 13 14:28:18.726647 kernel: raid6: sse2x4 xor() 7245 MB/s
Dec 13 14:28:18.745650 kernel: raid6: sse2x2 gen() 12958 MB/s
Dec 13 14:28:18.765651 kernel: raid6: sse2x2 xor() 7441 MB/s
Dec 13 14:28:18.785649 kernel: raid6: sse2x1 gen() 11588 MB/s
Dec 13 14:28:18.808448 kernel: raid6: sse2x1 xor() 5945 MB/s
Dec 13 14:28:18.808474 kernel: raid6: using algorithm avx512x4 gen() 18421 MB/s
Dec 13 14:28:18.808486 kernel: raid6: .... xor() 8371 MB/s, rmw enabled
Dec 13 14:28:18.811796 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 14:28:18.830659 kernel: xor: automatically using best checksumming function avx
Dec 13 14:28:18.926663 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:28:18.934690 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:28:18.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.938000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:28:18.938000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:28:18.939838 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:28:18.954277 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Dec 13 14:28:18.958858 systemd[1]: Started systemd-udevd.service.
Dec 13 14:28:18.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:18.967139 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:28:18.982991 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Dec 13 14:28:19.013137 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:28:19.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:19.016177 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:28:19.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:19.049243 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:28:19.092653 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:28:19.123657 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:28:19.128658 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:28:19.133654 kernel: hv_vmbus: Vmbus version:5.2
Dec 13 14:28:19.142655 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 14:28:19.162133 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 13 14:28:19.172229 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:28:19.172270 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 14:28:19.178656 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 14:28:19.183668 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 14:28:19.183704 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 13 14:28:19.189681 kernel: scsi host0: storvsc_host_t
Dec 13 14:28:19.200538 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 14:28:19.200711 kernel: scsi host1: storvsc_host_t
Dec 13 14:28:19.200738 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 14:28:19.214904 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 14:28:19.238153 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 14:28:19.249576 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 14:28:19.249597 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 14:28:19.267077 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 14:28:19.267213 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 14:28:19.267318 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 14:28:19.267414 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 14:28:19.267508 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 14:28:19.267605 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:28:19.267621 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 14:28:19.326034 kernel: hv_netvsc 7c1e5235-fcae-7c1e-5235-fcae7c1e5235 eth0: VF slot 1 added
Dec 13 14:28:19.340305 kernel: hv_vmbus: registering driver hv_pci
Dec 13 14:28:19.340340 kernel: hv_pci 693a7527-f078-4104-9b27-a1ef734d4663: PCI VMBus probing: Using version 0x10004
Dec 13 14:28:19.414766 kernel: hv_pci 693a7527-f078-4104-9b27-a1ef734d4663: PCI host bridge to bus f078:00
Dec 13 14:28:19.414941 kernel: pci_bus f078:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Dec 13 14:28:19.415122 kernel: pci_bus f078:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 14:28:19.415265 kernel: pci f078:00:02.0: [15b3:1016] type 00 class 0x020000
Dec 13 14:28:19.415443 kernel: pci f078:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 14:28:19.415604 kernel: pci f078:00:02.0: enabling Extended Tags
Dec 13 14:28:19.415782 kernel: pci f078:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f078:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 14:28:19.415934 kernel: pci_bus f078:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 14:28:19.416075 kernel: pci f078:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 14:28:19.508664 kernel: mlx5_core f078:00:02.0: firmware version: 14.30.5000
Dec 13 14:28:19.767339 kernel: mlx5_core f078:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Dec 13 14:28:19.767519 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (436)
Dec 13 14:28:19.767539 kernel: mlx5_core f078:00:02.0: Supported tc offload range - chains: 1, prios: 1
Dec 13 14:28:19.767697 kernel: mlx5_core f078:00:02.0: mlx5e_tc_post_act_init:40:(pid 7): firmware level support is missing
Dec 13 14:28:19.767841 kernel: hv_netvsc 7c1e5235-fcae-7c1e-5235-fcae7c1e5235 eth0: VF registering: eth1
Dec 13 14:28:19.767940 kernel: mlx5_core f078:00:02.0 eth1: joined to eth0
Dec 13 14:28:19.686556 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:28:19.742075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:28:19.779657 kernel: mlx5_core f078:00:02.0 enP61560s1: renamed from eth1
Dec 13 14:28:19.895771 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:28:19.902270 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:28:19.908696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:28:19.912138 systemd[1]: Starting disk-uuid.service...
Dec 13 14:28:19.925653 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:28:19.932656 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:28:20.940659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:28:20.941042 disk-uuid[565]: The operation has completed successfully.
Dec 13 14:28:21.011246 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:28:21.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.011348 systemd[1]: Finished disk-uuid.service.
Dec 13 14:28:21.023465 systemd[1]: Starting verity-setup.service...
Dec 13 14:28:21.062866 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:28:21.316343 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:28:21.320698 systemd[1]: Finished verity-setup.service.
Dec 13 14:28:21.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.325034 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:28:21.412495 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:28:21.417524 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:28:21.414382 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:28:21.415169 systemd[1]: Starting ignition-setup.service...
Dec 13 14:28:21.420791 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:28:21.450888 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:28:21.450933 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:28:21.450957 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:28:21.494691 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:28:21.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.500000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:28:21.501999 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:28:21.519183 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:28:21.531768 systemd-networkd[807]: lo: Link UP
Dec 13 14:28:21.531777 systemd-networkd[807]: lo: Gained carrier
Dec 13 14:28:21.536122 systemd-networkd[807]: Enumeration completed
Dec 13 14:28:21.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.536220 systemd[1]: Started systemd-networkd.service.
Dec 13 14:28:21.540301 systemd[1]: Reached target network.target.
Dec 13 14:28:21.541787 systemd-networkd[807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:28:21.550073 systemd[1]: Starting iscsiuio.service...
Dec 13 14:28:21.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.557906 systemd[1]: Started iscsiuio.service.
Dec 13 14:28:21.562383 systemd[1]: Starting iscsid.service...
Dec 13 14:28:21.569196 iscsid[816]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:28:21.569196 iscsid[816]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:28:21.569196 iscsid[816]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:28:21.569196 iscsid[816]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:28:21.569196 iscsid[816]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:28:21.569196 iscsid[816]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:28:21.595794 systemd[1]: Started iscsid.service.
Dec 13 14:28:21.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.600013 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:28:21.610785 kernel: mlx5_core f078:00:02.0 enP61560s1: Link up
Dec 13 14:28:21.613795 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:28:21.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.619380 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:28:21.623182 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:28:21.625377 systemd[1]: Reached target remote-fs.target.
Dec 13 14:28:21.628154 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:28:21.639225 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:28:21.650846 kernel: hv_netvsc 7c1e5235-fcae-7c1e-5235-fcae7c1e5235 eth0: Data path switched to VF: enP61560s1
Dec 13 14:28:21.651103 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:28:21.647529 systemd-networkd[807]: enP61560s1: Link UP
Dec 13 14:28:21.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:21.647681 systemd-networkd[807]: eth0: Link UP
Dec 13 14:28:21.651892 systemd-networkd[807]: eth0: Gained carrier
Dec 13 14:28:21.654800 systemd[1]: Finished ignition-setup.service.
Dec 13 14:28:21.658386 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:28:21.667939 systemd-networkd[807]: enP61560s1: Gained carrier
Dec 13 14:28:21.697708 systemd-networkd[807]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 14:28:23.294885 systemd-networkd[807]: eth0: Gained IPv6LL
Dec 13 14:28:24.758409 ignition[831]: Ignition 2.14.0
Dec 13 14:28:24.758426 ignition[831]: Stage: fetch-offline
Dec 13 14:28:24.758525 ignition[831]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:24.758579 ignition[831]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:28:24.866827 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:28:24.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:24.868332 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:28:24.891032 kernel: kauditd_printk_skb: 18 callbacks suppressed
Dec 13 14:28:24.891063 kernel: audit: type=1130 audit(1734100104.871:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:24.867043 ignition[831]: parsed url from cmdline: ""
Dec 13 14:28:24.873877 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:28:24.867048 ignition[831]: no config URL provided
Dec 13 14:28:24.867053 ignition[831]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:28:24.867063 ignition[831]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:28:24.867069 ignition[831]: failed to fetch config: resource requires networking
Dec 13 14:28:24.867417 ignition[831]: Ignition finished successfully
Dec 13 14:28:24.882205 ignition[837]: Ignition 2.14.0
Dec 13 14:28:24.882211 ignition[837]: Stage: fetch
Dec 13 14:28:24.882314 ignition[837]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:24.882338 ignition[837]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:28:24.888085 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:28:24.888394 ignition[837]: parsed url from cmdline: ""
Dec 13 14:28:24.888399 ignition[837]: no config URL provided
Dec 13 14:28:24.888407 ignition[837]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:28:24.888422 ignition[837]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:28:24.888477 ignition[837]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 13 14:28:24.984584 ignition[837]: GET result: OK
Dec 13 14:28:24.984751 ignition[837]: config has been read from IMDS userdata
Dec 13 14:28:24.984794 ignition[837]: parsing config with SHA512: 2fbb98282e45f479c8e8ff4fd7597c682173cbd90a4bc041fcc4674b497962df1f3078ba2223cf6d1635de7c998ea2a7226dbf5b3b74fd7b270e4077a53b760c
Dec 13 14:28:24.991648 unknown[837]: fetched base config from "system"
Dec 13 14:28:24.991663 unknown[837]: fetched base config from "system"
Dec 13 14:28:24.991671 unknown[837]: fetched user config from "azure"
Dec 13 14:28:24.997946 ignition[837]: fetch: fetch complete
Dec 13 14:28:24.997956 ignition[837]: fetch: fetch passed
Dec 13 14:28:24.998009 ignition[837]: Ignition finished successfully
Dec 13 14:28:25.004140 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:28:25.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:25.017703 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:28:25.023348 kernel: audit: type=1130 audit(1734100105.005:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:25.029994 ignition[843]: Ignition 2.14.0
Dec 13 14:28:25.030003 ignition[843]: Stage: kargs
Dec 13 14:28:25.030134 ignition[843]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:25.030166 ignition[843]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:28:25.034834 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:28:25.036579 ignition[843]: kargs: kargs passed
Dec 13 14:28:25.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:25.037468 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:28:25.057587 kernel: audit: type=1130 audit(1734100105.041:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:25.036626 ignition[843]: Ignition finished successfully
Dec 13 14:28:25.053417 systemd[1]: Starting ignition-disks.service...
Dec 13 14:28:25.063073 ignition[849]: Ignition 2.14.0
Dec 13 14:28:25.063084 ignition[849]: Stage: disks
Dec 13 14:28:25.063209 ignition[849]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:25.063234 ignition[849]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:28:25.065981 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:28:25.068753 ignition[849]: disks: disks passed
Dec 13 14:28:25.068799 ignition[849]: Ignition finished successfully
Dec 13 14:28:25.075973 systemd[1]: Finished ignition-disks.service.
Dec 13 14:28:25.102396 kernel: audit: type=1130 audit(1734100105.077:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:25.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:25.078009 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:28:25.090499 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:28:25.090909 systemd[1]: Reached target local-fs.target.
Dec 13 14:28:25.091313 systemd[1]: Reached target sysinit.target.
Dec 13 14:28:25.091730 systemd[1]: Reached target basic.target.
Dec 13 14:28:25.093027 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:28:25.153907 systemd-fsck[857]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks
Dec 13 14:28:25.157513 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:28:25.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:25.162835 systemd[1]: Mounting sysroot.mount...
Dec 13 14:28:25.177817 kernel: audit: type=1130 audit(1734100105.161:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:25.188657 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:28:25.190375 systemd[1]: Mounted sysroot.mount.
Dec 13 14:28:25.196239 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:28:25.230805 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:28:25.234468 systemd[1]: Starting flatcar-metadata-hostname.service...
Dec 13 14:28:25.241273 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:28:25.241298 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:28:25.252809 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:28:25.305123 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:28:25.311012 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:28:25.319156 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (868) Dec 13 14:28:25.329698 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:28:25.329736 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:28:25.329752 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:28:25.333092 initrd-setup-root[873]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:28:25.340811 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:28:25.353574 initrd-setup-root[899]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:28:25.374163 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:28:25.381050 initrd-setup-root[915]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:28:25.828372 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:28:25.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:25.831590 systemd[1]: Starting ignition-mount.service... Dec 13 14:28:25.848416 kernel: audit: type=1130 audit(1734100105.830:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:25.849250 systemd[1]: Starting sysroot-boot.service... Dec 13 14:28:25.855534 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:25.855672 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:25.876544 systemd[1]: Finished sysroot-boot.service. Dec 13 14:28:25.893144 kernel: audit: type=1130 audit(1734100105.877:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:25.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:25.895363 ignition[936]: INFO : Ignition 2.14.0 Dec 13 14:28:25.895363 ignition[936]: INFO : Stage: mount Dec 13 14:28:25.899330 ignition[936]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:25.899330 ignition[936]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:25.910084 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:25.910084 ignition[936]: INFO : mount: mount passed Dec 13 14:28:25.910084 ignition[936]: INFO : Ignition finished successfully Dec 13 14:28:25.928436 kernel: audit: type=1130 audit(1734100105.909:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:25.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:25.907837 systemd[1]: Finished ignition-mount.service. 
Dec 13 14:28:26.701703 coreos-metadata[867]: Dec 13 14:28:26.701 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:28:26.731260 coreos-metadata[867]: Dec 13 14:28:26.731 INFO Fetch successful Dec 13 14:28:26.766544 coreos-metadata[867]: Dec 13 14:28:26.766 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:28:26.783405 coreos-metadata[867]: Dec 13 14:28:26.783 INFO Fetch successful Dec 13 14:28:26.818090 coreos-metadata[867]: Dec 13 14:28:26.817 INFO wrote hostname ci-3510.3.6-a-e445ccd8ad to /sysroot/etc/hostname Dec 13 14:28:26.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.820311 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 14:28:26.842463 kernel: audit: type=1130 audit(1734100106.824:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.825680 systemd[1]: Starting ignition-files.service... Dec 13 14:28:26.845653 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:28:26.855664 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (946) Dec 13 14:28:26.860647 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:28:26.860682 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:28:26.867832 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:28:26.872081 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:28:26.885890 ignition[965]: INFO : Ignition 2.14.0 Dec 13 14:28:26.885890 ignition[965]: INFO : Stage: files Dec 13 14:28:26.890326 ignition[965]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:26.890326 ignition[965]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:26.899533 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:26.916056 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:28:26.919397 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:28:26.919397 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:28:26.951102 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:28:26.954733 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:28:26.958180 unknown[965]: wrote ssh authorized keys file for user: core Dec 13 14:28:26.960533 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:28:26.977779 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:28:26.982094 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:28:26.982094 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:28:26.990832 
ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:28:27.240551 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:28:27.370617 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:28:27.375518 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:28:27.379930 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:28:27.384160 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:28:27.388526 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:28:27.392711 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:28:27.397088 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:28:27.397088 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:28:27.397088 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:28:27.397088 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:28:27.397088 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:28:27.397088 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:27.397088 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:27.397088 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:28:27.397088 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:28:27.454878 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2444544068" Dec 13 14:28:27.454878 ignition[965]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2444544068": device or resource busy Dec 13 14:28:27.454878 ignition[965]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2444544068", trying btrfs: device or resource busy Dec 13 14:28:27.454878 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2444544068" Dec 13 14:28:27.476514 kernel: BTRFS info: devid 1 device path /dev/sda6 
changed to /dev/disk/by-label/OEM scanned by ignition (967) Dec 13 14:28:27.476544 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2444544068" Dec 13 14:28:27.476544 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2444544068" Dec 13 14:28:27.476544 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2444544068" Dec 13 14:28:27.476544 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:28:27.476544 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:28:27.476544 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:28:27.513663 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem630364148" Dec 13 14:28:27.513663 ignition[965]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem630364148": device or resource busy Dec 13 14:28:27.513663 ignition[965]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem630364148", trying btrfs: device or resource busy Dec 13 14:28:27.513663 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem630364148" Dec 13 14:28:27.513663 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem630364148" Dec 13 14:28:27.513663 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem630364148" Dec 13 14:28:27.542680 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem630364148" Dec 13 14:28:27.542680 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:28:27.542680 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:27.542680 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:28:27.555387 systemd[1]: mnt-oem630364148.mount: Deactivated successfully. 
Dec 13 14:28:27.926232 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Dec 13 14:28:28.337479 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:28.337479 ignition[965]: INFO : files: op(14): [started] processing unit "nvidia.service" Dec 13 14:28:28.337479 ignition[965]: INFO : files: op(14): [finished] processing unit "nvidia.service" Dec 13 14:28:28.370018 kernel: audit: type=1130 audit(1734100108.345:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(15): [started] processing unit "waagent.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(15): [finished] processing unit "waagent.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(16): [started] processing unit "containerd.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(16): op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(16): [finished] processing unit "containerd.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:28:28.370133 ignition[965]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:28:28.370133 ignition[965]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:28:28.370133 ignition[965]: INFO : files: files passed Dec 13 14:28:28.370133 ignition[965]: INFO : Ignition finished successfully Dec 13 14:28:28.375000 audit[1]: SERVICE_START 
pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.343738 systemd[1]: Finished ignition-files.service. Dec 13 14:28:28.348996 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:28:28.457419 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:28:28.360520 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:28:28.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.362849 systemd[1]: Starting ignition-quench.service... Dec 13 14:28:28.367883 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:28:28.367978 systemd[1]: Finished ignition-quench.service. Dec 13 14:28:28.375365 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:28:28.380754 systemd[1]: Reached target ignition-complete.target. Dec 13 14:28:28.450095 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:28:28.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.465369 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:28:28.465464 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:28:28.469682 systemd[1]: Reached target initrd-fs.target. Dec 13 14:28:28.471576 systemd[1]: Reached target initrd.target. Dec 13 14:28:28.473557 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:28:28.474333 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:28:28.488029 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:28:28.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.492100 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:28:28.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.506557 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Dec 13 14:28:28.506658 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:28:28.510814 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:28:28.513930 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:28:28.516140 systemd[1]: Stopped target timers.target. Dec 13 14:28:28.518078 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:28:28.518124 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:28:28.522062 systemd[1]: Stopped target initrd.target. Dec 13 14:28:28.526539 systemd[1]: Stopped target basic.target. Dec 13 14:28:28.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.528630 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:28:28.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.532619 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:28:28.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.534801 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:28:28.538870 systemd[1]: Stopped target remote-fs.target. Dec 13 14:28:28.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.541582 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:28:28.546330 systemd[1]: Stopped target sysinit.target. Dec 13 14:28:28.550565 systemd[1]: Stopped target local-fs.target. Dec 13 14:28:28.607944 ignition[1003]: INFO : Ignition 2.14.0 Dec 13 14:28:28.607944 ignition[1003]: INFO : Stage: umount Dec 13 14:28:28.607944 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:28.607944 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:28.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:28.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.555067 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:28:28.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.628874 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:28.628874 ignition[1003]: INFO : umount: umount passed Dec 13 14:28:28.628874 ignition[1003]: INFO : Ignition finished successfully Dec 13 14:28:28.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.557434 systemd[1]: Stopped target swap.target. Dec 13 14:28:28.561449 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:28:28.561522 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:28:28.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.565741 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:28:28.571356 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:28:28.571420 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:28:28.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.573549 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:28:28.573590 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:28:28.577914 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:28:28.577968 systemd[1]: Stopped ignition-files.service. Dec 13 14:28:28.579998 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:28:28.580052 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 14:28:28.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.581824 systemd[1]: Stopping ignition-mount.service... Dec 13 14:28:28.691000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:28:28.582726 systemd[1]: Stopping iscsiuio.service... Dec 13 14:28:28.583537 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:28:28.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.583693 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:28:28.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:28:28.583752 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:28:28.584131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:28:28.584173 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:28:28.599854 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:28:28.599965 systemd[1]: Stopped iscsiuio.service. Dec 13 14:28:28.613359 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:28:28.613443 systemd[1]: Stopped ignition-mount.service. Dec 13 14:28:28.616999 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:28:28.617042 systemd[1]: Stopped ignition-disks.service. Dec 13 14:28:28.624824 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:28:28.624876 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:28:28.628856 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:28:28.628922 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:28:28.634455 systemd[1]: Stopped target network.target. Dec 13 14:28:28.638191 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:28:28.638246 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:28:28.642764 systemd[1]: Stopped target paths.target. Dec 13 14:28:28.644526 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:28:28.648679 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:28:28.651236 systemd[1]: Stopped target slices.target. Dec 13 14:28:28.652119 systemd[1]: Stopped target sockets.target. Dec 13 14:28:28.652541 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:28:28.652571 systemd[1]: Closed iscsid.socket. Dec 13 14:28:28.652964 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:28:28.652998 systemd[1]: Closed iscsiuio.socket. Dec 13 14:28:28.653381 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:28:28.653417 systemd[1]: Stopped ignition-setup.service. Dec 13 14:28:28.654082 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:28:28.654340 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:28:28.669319 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:28:28.669417 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:28:28.669728 systemd-networkd[807]: eth0: DHCPv6 lease lost Dec 13 14:28:28.723000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:28:28.676494 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:28:28.676600 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:28:28.692983 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:28:28.693401 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:28:28.693436 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:28:28.696790 systemd[1]: Stopping network-cleanup.service... Dec 13 14:28:28.699722 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:28:28.699785 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:28:28.703964 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:28:28.704014 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:28:28.708721 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:28:28.708776 systemd[1]: Stopped systemd-modules-load.service. 
Dec 13 14:28:28.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.796776 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:28:28.799786 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:28:28.805093 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:28:28.807677 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:28:28.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.811661 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:28:28.811710 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:28:28.818148 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:28:28.818192 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:28:28.824526 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:28:28.824583 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:28:28.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.830592 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:28:28.841263 kernel: hv_netvsc 7c1e5235-fcae-7c1e-5235-fcae7c1e5235 eth0: Data path switched from VF: enP61560s1 Dec 13 14:28:28.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.831653 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:28:28.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.841291 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:28:28.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.841343 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:28:28.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.845272 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:28:28.847236 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:28:28.847288 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:28:28.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.851910 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:28:28.851961 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:28:28.855954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 14:28:28.856005 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:28:28.859736 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:28:28.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.873977 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:28:28.874086 systemd[1]: Stopped network-cleanup.service. Dec 13 14:28:28.887662 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:28:28.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:28.887757 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:28:29.158407 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:28:29.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:29.158531 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:28:29.163480 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:28:29.167614 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:28:29.170169 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:28:29.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:29.177582 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:28:29.212944 systemd[1]: Switching root. Dec 13 14:28:29.216000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:28:29.216000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:28:29.216000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:28:29.217000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:28:29.217000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:28:29.239159 iscsid[816]: iscsid shutting down. Dec 13 14:28:29.241070 systemd-journald[183]: Received SIGTERM from PID 1 (n/a). Dec 13 14:28:29.241140 systemd-journald[183]: Journal stopped Dec 13 14:28:42.486760 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:28:42.486799 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:28:42.486818 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:28:42.486834 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:28:42.486851 kernel: SELinux: policy capability open_perms=1 Dec 13 14:28:42.486866 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:28:42.486883 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:28:42.486904 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:28:42.486921 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:28:42.486939 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:28:42.486955 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:28:42.486970 kernel: kauditd_printk_skb: 48 callbacks suppressed Dec 13 14:28:42.486986 kernel: audit: type=1403 audit(1734100112.132:87): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:28:42.487005 systemd[1]: Successfully loaded SELinux policy in 258.912ms. Dec 13 14:28:42.487031 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.934ms. Dec 13 14:28:42.487050 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:28:42.487070 systemd[1]: Detected virtualization microsoft. Dec 13 14:28:42.487089 systemd[1]: Detected architecture x86-64. Dec 13 14:28:42.487105 systemd[1]: Detected first boot. Dec 13 14:28:42.487126 systemd[1]: Hostname set to <ci-3510.3.6-a-e445ccd8ad>. Dec 13 14:28:42.487143 systemd[1]: Initializing machine ID from random generator. Dec 13 14:28:42.487164 kernel: audit: type=1400 audit(1734100112.733:88): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:28:42.487183 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 14:28:42.487203 kernel: audit: type=1400 audit(1734100114.155:89): avc: denied { associate } for pid=1054 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:28:42.487218 kernel: audit: type=1300 audit(1734100114.155:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014f672 a1=c0000d0af8 a2=c0000d8a00 a3=32 items=0 ppid=1037 pid=1054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:42.487514 kernel: audit: type=1327 audit(1734100114.155:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:28:42.487526 kernel: audit: type=1400 audit(1734100114.163:90): avc: denied { associate } for pid=1054 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:28:42.487536 kernel: audit: type=1300 audit(1734100114.163:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014f749 a2=1ed a3=0 items=2 ppid=1037 pid=1054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:42.487548 kernel: audit: type=1307 audit(1734100114.163:90): cwd="/" Dec 13 14:28:42.487558 kernel: audit: type=1302 audit(1734100114.163:90): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:42.487568 kernel: audit: type=1302 audit(1734100114.163:90): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:42.487580 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:28:42.487593 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:28:42.487603 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:28:42.487613 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:28:42.487624 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:28:42.487633 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:28:42.487655 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:28:42.487667 systemd[1]: Created slice system-getty.slice. Dec 13 14:28:42.487677 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:28:42.487692 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Dec 13 14:28:42.487702 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:28:42.487715 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:28:42.487725 systemd[1]: Created slice user.slice. Dec 13 14:28:42.487737 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:28:42.487747 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:28:42.487761 systemd[1]: Set up automount boot.automount. Dec 13 14:28:42.487774 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:28:42.487784 systemd[1]: Reached target integritysetup.target. Dec 13 14:28:42.487796 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:28:42.487806 systemd[1]: Reached target remote-fs.target. Dec 13 14:28:42.487818 systemd[1]: Reached target slices.target. Dec 13 14:28:42.487828 systemd[1]: Reached target swap.target. Dec 13 14:28:42.487841 systemd[1]: Reached target torcx.target. Dec 13 14:28:42.487854 systemd[1]: Reached target veritysetup.target. Dec 13 14:28:42.487865 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:28:42.487879 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:28:42.487891 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:28:42.487901 kernel: audit: type=1400 audit(1734100122.164:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:28:42.487913 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:28:42.487923 kernel: audit: type=1335 audit(1734100122.164:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:28:42.487934 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:28:42.487946 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:28:42.487958 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:28:42.487969 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:28:42.487981 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:28:42.487991 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:28:42.488005 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:28:42.488018 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:28:42.488028 systemd[1]: Mounting media.mount... Dec 13 14:28:42.488038 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:42.488051 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:28:42.488063 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:28:42.488074 systemd[1]: Mounting tmp.mount... Dec 13 14:28:42.488087 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:28:42.488099 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:42.488111 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:28:42.488125 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:28:42.488138 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:42.488148 systemd[1]: Starting modprobe@drm.service... Dec 13 14:28:42.488159 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:28:42.488170 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:28:42.488182 systemd[1]: Starting modprobe@loop.service... 
Dec 13 14:28:42.488192 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:28:42.488205 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:28:42.488220 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:28:42.488230 systemd[1]: Starting systemd-journald.service... Dec 13 14:28:42.488242 kernel: loop: module loaded Dec 13 14:28:42.488251 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:28:42.488263 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:28:42.488273 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:28:42.488283 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:28:42.488293 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:42.488304 kernel: fuse: init (API version 7.34) Dec 13 14:28:42.488313 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:28:42.488323 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:28:42.488333 systemd[1]: Mounted media.mount. Dec 13 14:28:42.488342 kernel: audit: type=1305 audit(1734100122.483:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:28:42.488356 systemd-journald[1171]: Journal started Dec 13 14:28:42.488404 systemd-journald[1171]: Runtime Journal (/run/log/journal/2d38181a8fbd49879943993d19721e73) is 8.0M, max 159.0M, 151.0M free. Dec 13 14:28:42.164000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:28:42.164000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:28:42.483000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:28:42.483000 audit[1171]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdd970a500 a2=4000 a3=7ffdd970a59c items=0 ppid=1 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:42.524175 kernel: audit: type=1300 audit(1734100122.483:93): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdd970a500 a2=4000 a3=7ffdd970a59c items=0 ppid=1 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:42.524225 systemd[1]: Started systemd-journald.service. Dec 13 14:28:42.524250 kernel: audit: type=1327 audit(1734100122.483:93): proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:28:42.483000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:28:42.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.532739 systemd[1]: Mounted sys-kernel-debug.mount. 
Dec 13 14:28:42.545217 kernel: audit: type=1130 audit(1734100122.531:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.546521 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:28:42.548674 systemd[1]: Mounted tmp.mount. Dec 13 14:28:42.550563 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:28:42.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.553226 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:28:42.566853 kernel: audit: type=1130 audit(1734100122.552:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.567445 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:28:42.567686 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:28:42.570397 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:42.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.572754 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:28:42.584663 kernel: audit: type=1130 audit(1734100122.566:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.597656 kernel: audit: type=1130 audit(1734100122.569:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.597631 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:28:42.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.611702 kernel: audit: type=1131 audit(1734100122.569:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.597836 systemd[1]: Finished modprobe@drm.service. Dec 13 14:28:42.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:42.614985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:28:42.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.615268 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:28:42.619916 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:28:42.620107 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:28:42.622484 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:28:42.622737 systemd[1]: Finished modprobe@loop.service. Dec 13 14:28:42.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.626068 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:28:42.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.630783 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:28:42.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.633488 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:28:42.636182 systemd[1]: Reached target network-pre.target. Dec 13 14:28:42.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.640044 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Dec 13 14:28:42.643467 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:28:42.648098 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:28:42.663735 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:28:42.667138 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:28:42.669479 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:28:42.670690 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:28:42.672812 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:28:42.674017 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:28:42.677214 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:28:42.685934 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:28:42.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.688622 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:28:42.691197 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:28:42.694400 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:28:42.697521 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:28:42.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.700146 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:28:42.711771 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:28:42.741470 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:28:42.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.765352 systemd-journald[1171]: Time spent on flushing to /var/log/journal/2d38181a8fbd49879943993d19721e73 is 24.941ms for 1101 entries. Dec 13 14:28:42.765352 systemd-journald[1171]: System Journal (/var/log/journal/2d38181a8fbd49879943993d19721e73) is 8.0M, max 2.6G, 2.6G free. Dec 13 14:28:42.837908 systemd-journald[1171]: Received client request to flush runtime journal. Dec 13 14:28:42.839066 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:28:42.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:43.173579 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:28:43.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:43.177935 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:28:43.461972 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Dec 13 14:28:43.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:43.911925 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:28:43.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:43.916191 systemd[1]: Starting systemd-udevd.service... Dec 13 14:28:43.935724 systemd-udevd[1222]: Using default interface naming scheme 'v252'. Dec 13 14:28:44.096651 systemd[1]: Started systemd-udevd.service. Dec 13 14:28:44.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.101202 systemd[1]: Starting systemd-networkd.service... Dec 13 14:28:44.138000 systemd[1]: Found device dev-ttyS0.device. Dec 13 14:28:44.192944 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:28:44.224655 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:28:44.229655 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 14:28:44.214000 audit[1231]: AVC avc: denied { confidentiality } for pid=1231 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:28:44.235651 kernel: hv_vmbus: registering driver hv_balloon Dec 13 14:28:44.253681 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 14:28:44.253738 kernel: hv_vmbus: registering driver hv_utils Dec 13 14:28:44.263963 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 14:28:44.264017 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 14:28:44.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.264432 systemd[1]: Started systemd-userdbd.service. 
Dec 13 14:28:44.270585 kernel: Console: switching to colour dummy device 80x25 Dec 13 14:28:44.271655 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:28:44.297651 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 14:28:44.214000 audit[1231]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fa30157740 a1=f884 a2=7fa22c138bc5 a3=5 items=12 ppid=1222 pid=1231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:44.214000 audit: CWD cwd="/" Dec 13 14:28:44.214000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=1 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=2 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=3 name=(null) inode=15541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=4 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=5 name=(null) inode=15542 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=6 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=7 name=(null) inode=15543 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=8 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=9 name=(null) inode=15544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=10 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PATH item=11 name=(null) inode=15545 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:44.214000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:28:44.378069 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 14:28:44.378157 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 14:28:44.378183 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 14:28:44.976916 kernel: BTRFS info: devid 1 
device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1229) Dec 13 14:28:45.038236 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 14:28:45.082743 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Dec 13 14:28:45.149253 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:28:45.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:45.153441 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:28:45.188453 systemd-networkd[1228]: lo: Link UP Dec 13 14:28:45.188462 systemd-networkd[1228]: lo: Gained carrier Dec 13 14:28:45.189107 systemd-networkd[1228]: Enumeration completed Dec 13 14:28:45.189238 systemd[1]: Started systemd-networkd.service. Dec 13 14:28:45.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:45.193327 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:28:45.215671 systemd-networkd[1228]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:28:45.268753 kernel: mlx5_core f078:00:02.0 enP61560s1: Link up Dec 13 14:28:45.294742 kernel: hv_netvsc 7c1e5235-fcae-7c1e-5235-fcae7c1e5235 eth0: Data path switched to VF: enP61560s1 Dec 13 14:28:45.295064 systemd-networkd[1228]: enP61560s1: Link UP Dec 13 14:28:45.295258 systemd-networkd[1228]: eth0: Link UP Dec 13 14:28:45.295265 systemd-networkd[1228]: eth0: Gained carrier Dec 13 14:28:45.300059 systemd-networkd[1228]: enP61560s1: Gained carrier Dec 13 14:28:45.329840 systemd-networkd[1228]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:28:45.445285 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:28:45.475884 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:28:45.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:45.478672 systemd[1]: Reached target cryptsetup.target. Dec 13 14:28:45.482323 systemd[1]: Starting lvm2-activation.service... Dec 13 14:28:45.487003 lvm[1302]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:28:45.505877 systemd[1]: Finished lvm2-activation.service. Dec 13 14:28:45.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:45.508106 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:28:45.510135 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:28:45.510173 systemd[1]: Reached target local-fs.target. Dec 13 14:28:45.512135 systemd[1]: Reached target machines.target. Dec 13 14:28:45.515707 systemd[1]: Starting ldconfig.service... Dec 13 14:28:45.518020 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 14:28:45.518107 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:45.519607 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:28:45.522670 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:28:45.526418 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:28:45.529775 systemd[1]: Starting systemd-sysext.service... Dec 13 14:28:45.551762 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1305 (bootctl) Dec 13 14:28:45.553120 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:28:45.726904 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:28:45.731474 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:45.731801 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:28:46.159746 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:28:46.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.186527 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:28:46.198806 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:28:46.211734 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:28:46.215652 (sd-sysext)[1321]: Using extensions 'kubernetes'. Dec 13 14:28:46.216077 (sd-sysext)[1321]: Merged extensions into '/usr'. Dec 13 14:28:46.234312 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:46.235865 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:28:46.238153 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.239897 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:46.243558 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:28:46.249322 systemd[1]: Starting modprobe@loop.service... Dec 13 14:28:46.254177 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.254411 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:46.254611 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:46.258336 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:28:46.260878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:46.261066 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:28:46.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.264072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 14:28:46.264262 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:28:46.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.267173 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:28:46.267362 systemd[1]: Finished modprobe@loop.service. Dec 13 14:28:46.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.271225 systemd[1]: Finished systemd-sysext.service. Dec 13 14:28:46.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.276441 systemd[1]: Starting ensure-sysext.service... Dec 13 14:28:46.278471 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:28:46.278548 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.279851 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:28:46.288990 systemd[1]: Reloading. Dec 13 14:28:46.311607 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:28:46.325338 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:28:46.341141 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:28:46.351808 /usr/lib/systemd/system-generators/torcx-generator[1355]: time="2024-12-13T14:28:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:28:46.351848 /usr/lib/systemd/system-generators/torcx-generator[1355]: time="2024-12-13T14:28:46Z" level=info msg="torcx already run" Dec 13 14:28:46.468019 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:28:46.468041 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:28:46.486301 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:28:46.543250 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 13 14:28:46.554086 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:28:46.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.565984 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:46.566272 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.567607 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:46.571253 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:28:46.574838 systemd[1]: Starting modprobe@loop.service... Dec 13 14:28:46.576674 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.576910 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:46.577111 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:46.578729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:46.578920 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:28:46.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.585104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:28:46.585288 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:28:46.586561 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:28:46.586700 systemd[1]: Finished modprobe@loop.service. Dec 13 14:28:46.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.589015 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:28:46.589105 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 14:28:46.591629 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:46.594239 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.596265 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:46.598482 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:28:46.600891 systemd[1]: Starting modprobe@loop.service... Dec 13 14:28:46.602184 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.602462 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:46.602752 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:46.604527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:46.604873 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:28:46.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.612467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:28:46.612785 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:28:46.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.614555 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:28:46.614805 systemd[1]: Finished modprobe@loop.service. Dec 13 14:28:46.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.617786 systemd[1]: Finished ensure-sysext.service. Dec 13 14:28:46.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.619972 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:46.620452 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.625200 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:46.633593 systemd[1]: Starting modprobe@drm.service... 
Dec 13 14:28:46.634972 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.635140 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:46.635872 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:28:46.636055 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:46.636885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:46.637196 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:28:46.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.638557 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:28:46.638794 systemd[1]: Finished modprobe@drm.service. Dec 13 14:28:46.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.639272 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:28:46.760737 systemd-fsck[1319]: fsck.fat 4.2 (2021-01-31) Dec 13 14:28:46.760737 systemd-fsck[1319]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:28:46.763354 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:28:46.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.768286 systemd[1]: Mounting boot.mount... Dec 13 14:28:46.786392 systemd[1]: Mounted boot.mount. Dec 13 14:28:46.800879 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:28:46.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.063538 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:28:47.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.068063 systemd[1]: Starting audit-rules.service... Dec 13 14:28:47.071688 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:28:47.077049 systemd[1]: Starting systemd-journal-catalog-update.service... 
Dec 13 14:28:47.082208 systemd[1]: Starting systemd-resolved.service... Dec 13 14:28:47.086339 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:28:47.092337 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:28:47.096981 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:28:47.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.100000 audit[1461]: SYSTEM_BOOT pid=1461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.105604 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:28:47.109562 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:28:47.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.226250 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:28:47.228964 systemd[1]: Reached target time-set.target. Dec 13 14:28:47.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.282536 systemd-resolved[1458]: Positive Trust Anchors: Dec 13 14:28:47.282553 systemd-resolved[1458]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:28:47.282604 systemd-resolved[1458]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:28:47.299819 systemd-networkd[1228]: eth0: Gained IPv6LL Dec 13 14:28:47.301869 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:28:47.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.325480 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:28:47.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:47.394000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:28:47.394000 audit[1478]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7257d750 a2=420 a3=0 items=0 ppid=1454 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:47.394000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:28:47.396744 augenrules[1478]: No rules Dec 13 14:28:47.397201 systemd[1]: Finished audit-rules.service. Dec 13 14:28:47.421373 systemd-resolved[1458]: Using system hostname 'ci-3510.3.6-a-e445ccd8ad'. Dec 13 14:28:47.423027 systemd[1]: Started systemd-resolved.service. Dec 13 14:28:47.425094 systemd[1]: Reached target network.target. Dec 13 14:28:47.427147 systemd[1]: Reached target network-online.target. Dec 13 14:28:47.429110 systemd[1]: Reached target nss-lookup.target. Dec 13 14:28:47.474254 systemd-timesyncd[1459]: Contacted time server 89.234.64.77:123 (0.flatcar.pool.ntp.org). Dec 13 14:28:47.474325 systemd-timesyncd[1459]: Initial clock synchronization to Fri 2024-12-13 14:28:47.474072 UTC. Dec 13 14:28:52.810046 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:28:52.817846 systemd[1]: Finished ldconfig.service. Dec 13 14:28:52.822143 systemd[1]: Starting systemd-update-done.service... Dec 13 14:28:52.842841 systemd[1]: Finished systemd-update-done.service. Dec 13 14:28:52.845164 systemd[1]: Reached target sysinit.target. Dec 13 14:28:52.847284 systemd[1]: Started motdgen.path. Dec 13 14:28:52.848984 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:28:52.851931 systemd[1]: Started logrotate.timer. Dec 13 14:28:52.853779 systemd[1]: Started mdadm.timer. Dec 13 14:28:52.855414 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:28:52.857798 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:28:52.857959 systemd[1]: Reached target paths.target. Dec 13 14:28:52.859911 systemd[1]: Reached target timers.target. Dec 13 14:28:52.864555 systemd[1]: Listening on dbus.socket. Dec 13 14:28:52.867769 systemd[1]: Starting docker.socket... Dec 13 14:28:52.922627 systemd[1]: Listening on sshd.socket. Dec 13 14:28:52.925506 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:52.926027 systemd[1]: Listening on docker.socket. Dec 13 14:28:52.927989 systemd[1]: Reached target sockets.target. Dec 13 14:28:52.930080 systemd[1]: Reached target basic.target. Dec 13 14:28:52.932231 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:28:52.932289 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:28:52.932320 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:28:52.933416 systemd[1]: Starting containerd.service... Dec 13 14:28:52.937007 systemd[1]: Starting dbus.service... Dec 13 14:28:52.940504 systemd[1]: Starting enable-oem-cloudinit.service... 
Dec 13 14:28:52.943807 systemd[1]: Starting extend-filesystems.service... Dec 13 14:28:52.946041 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:28:52.947667 systemd[1]: Starting kubelet.service... Dec 13 14:28:52.951522 systemd[1]: Starting motdgen.service... Dec 13 14:28:52.956891 systemd[1]: Started nvidia.service. Dec 13 14:28:52.960511 systemd[1]: Starting prepare-helm.service... Dec 13 14:28:52.963593 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:28:52.967299 systemd[1]: Starting sshd-keygen.service... Dec 13 14:28:52.971998 systemd[1]: Starting systemd-logind.service... Dec 13 14:28:52.975870 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:52.975949 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:28:52.980452 systemd[1]: Starting update-engine.service... Dec 13 14:28:52.984706 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:28:52.995702 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:28:52.997989 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:28:53.031072 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:28:53.031373 systemd[1]: Finished motdgen.service. Dec 13 14:28:53.033256 jq[1493]: false Dec 13 14:28:53.033840 jq[1513]: true Dec 13 14:28:53.034195 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:28:53.034489 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:28:53.069138 extend-filesystems[1494]: Found loop1 Dec 13 14:28:53.069138 extend-filesystems[1494]: Found sda Dec 13 14:28:53.069138 extend-filesystems[1494]: Found sda1 Dec 13 14:28:53.069138 extend-filesystems[1494]: Found sda2 Dec 13 14:28:53.069138 extend-filesystems[1494]: Found sda3 Dec 13 14:28:53.069138 extend-filesystems[1494]: Found usr Dec 13 14:28:53.069138 extend-filesystems[1494]: Found sda4 Dec 13 14:28:53.069138 extend-filesystems[1494]: Found sda6 Dec 13 14:28:53.099269 extend-filesystems[1494]: Found sda7 Dec 13 14:28:53.099269 extend-filesystems[1494]: Found sda9 Dec 13 14:28:53.099269 extend-filesystems[1494]: Checking size of /dev/sda9 Dec 13 14:28:53.118029 jq[1528]: true Dec 13 14:28:53.142759 tar[1516]: linux-amd64/helm Dec 13 14:28:53.160946 extend-filesystems[1494]: Old size kept for /dev/sda9 Dec 13 14:28:53.163693 extend-filesystems[1494]: Found sr0 Dec 13 14:28:53.166160 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:28:53.166457 systemd[1]: Finished extend-filesystems.service. Dec 13 14:28:53.183311 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:28:53.185279 systemd-logind[1505]: New seat seat0. Dec 13 14:28:53.194154 env[1521]: time="2024-12-13T14:28:53.194107235Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:28:53.230239 dbus-daemon[1492]: [system] SELinux support is enabled Dec 13 14:28:53.230460 systemd[1]: Started dbus.service. 
Dec 13 14:28:53.235067 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:28:53.235101 systemd[1]: Reached target system-config.target. Dec 13 14:28:53.237646 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:28:53.237677 systemd[1]: Reached target user-config.target. Dec 13 14:28:53.245216 systemd[1]: Started systemd-logind.service. Dec 13 14:28:53.247526 bash[1554]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:28:53.246456 dbus-daemon[1492]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:28:53.248034 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:28:53.301817 env[1521]: time="2024-12-13T14:28:53.301763534Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:28:53.302125 env[1521]: time="2024-12-13T14:28:53.302104633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:53.308504 env[1521]: time="2024-12-13T14:28:53.308466216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:28:53.309437 env[1521]: time="2024-12-13T14:28:53.309411913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:53.309950 env[1521]: time="2024-12-13T14:28:53.309923011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:28:53.313753 env[1521]: time="2024-12-13T14:28:53.313706301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:53.313875 env[1521]: time="2024-12-13T14:28:53.313858600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:28:53.313948 env[1521]: time="2024-12-13T14:28:53.313934800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:53.314143 env[1521]: time="2024-12-13T14:28:53.314126800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:53.314525 env[1521]: time="2024-12-13T14:28:53.314500799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:53.317060 env[1521]: time="2024-12-13T14:28:53.317032492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:28:53.317163 env[1521]: time="2024-12-13T14:28:53.317147291Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 14:28:53.317338 env[1521]: time="2024-12-13T14:28:53.317295991Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:28:53.317435 env[1521]: time="2024-12-13T14:28:53.317421690Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329316557Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329354857Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329373657Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329422257Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329443157Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329504757Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329529257Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329548557Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329567357Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329585356Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329603156Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329621556Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329756356Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:28:53.331785 env[1521]: time="2024-12-13T14:28:53.329866356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330321354Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330359254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330380754Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330437154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330455754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330471354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330486954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330502454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330520354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330538454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330556154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330576154Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330710253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330750553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332325 env[1521]: time="2024-12-13T14:28:53.330768853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:28:53.332850 env[1521]: time="2024-12-13T14:28:53.330786653Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:28:53.332850 env[1521]: time="2024-12-13T14:28:53.330807453Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:28:53.332850 env[1521]: time="2024-12-13T14:28:53.330821253Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:28:53.332850 env[1521]: time="2024-12-13T14:28:53.330843253Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:28:53.332850 env[1521]: time="2024-12-13T14:28:53.330884453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:28:53.333028 env[1521]: time="2024-12-13T14:28:53.331145452Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:28:53.333028 env[1521]: time="2024-12-13T14:28:53.331217652Z" level=info msg="Connect containerd service" Dec 13 14:28:53.333028 env[1521]: time="2024-12-13T14:28:53.331262852Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.333360046Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.333645045Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.333688945Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.336959436Z" level=info msg="containerd successfully booted in 0.153696s" Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.339187430Z" level=info msg="Start subscribing containerd event" Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.339400929Z" level=info msg="Start recovering state" Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.339485529Z" level=info msg="Start event monitor" Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.339500829Z" level=info msg="Start snapshots syncer" Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.339519229Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:28:53.368575 env[1521]: time="2024-12-13T14:28:53.339530329Z" level=info msg="Start streaming server" Dec 13 14:28:53.333875 systemd[1]: Started containerd.service. Dec 13 14:28:53.367183 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:28:53.860199 update_engine[1508]: I1213 14:28:53.859651 1508 main.cc:92] Flatcar Update Engine starting Dec 13 14:28:53.902946 systemd[1]: Started update-engine.service. Dec 13 14:28:53.904895 update_engine[1508]: I1213 14:28:53.904798 1508 update_check_scheduler.cc:74] Next update check in 11m38s Dec 13 14:28:53.908168 systemd[1]: Started locksmithd.service. Dec 13 14:28:54.006383 tar[1516]: linux-amd64/LICENSE Dec 13 14:28:54.006383 tar[1516]: linux-amd64/README.md Dec 13 14:28:54.020283 systemd[1]: Finished prepare-helm.service. Dec 13 14:28:54.386612 systemd[1]: Started kubelet.service. Dec 13 14:28:55.147383 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:28:55.158549 kubelet[1614]: E1213 14:28:55.158493 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:28:55.161019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:28:55.161234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:28:55.175790 systemd[1]: Finished sshd-keygen.service. Dec 13 14:28:55.180231 systemd[1]: Starting issuegen.service... Dec 13 14:28:55.185749 systemd[1]: Started waagent.service. Dec 13 14:28:55.190243 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:28:55.190505 systemd[1]: Finished issuegen.service. Dec 13 14:28:55.194431 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:28:55.216789 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:28:55.221858 systemd[1]: Started getty@tty1.service. Dec 13 14:28:55.225990 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:28:55.228767 systemd[1]: Reached target getty.target. Dec 13 14:28:55.230900 systemd[1]: Reached target multi-user.target. Dec 13 14:28:55.234449 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:28:55.243153 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:28:55.243367 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:28:55.248488 systemd[1]: Startup finished in 729ms (firmware) + 27.538s (loader) + 14.966s (kernel) + 23.056s (userspace) = 1min 6.291s. 
Dec 13 14:28:55.291520 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:28:55.561901 login[1642]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:28:55.563696 login[1643]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:28:55.585829 systemd[1]: Created slice user-500.slice. Dec 13 14:28:55.587211 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:28:55.589371 systemd-logind[1505]: New session 2 of user core. Dec 13 14:28:55.592601 systemd-logind[1505]: New session 1 of user core. Dec 13 14:28:55.624500 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:28:55.626406 systemd[1]: Starting user@500.service... Dec 13 14:28:55.634843 (systemd)[1650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:55.803166 systemd[1650]: Queued start job for default target default.target. Dec 13 14:28:55.803439 systemd[1650]: Reached target paths.target. Dec 13 14:28:55.803461 systemd[1650]: Reached target sockets.target. Dec 13 14:28:55.803478 systemd[1650]: Reached target timers.target. Dec 13 14:28:55.803492 systemd[1650]: Reached target basic.target. Dec 13 14:28:55.803547 systemd[1650]: Reached target default.target. Dec 13 14:28:55.803580 systemd[1650]: Startup finished in 163ms. Dec 13 14:28:55.803641 systemd[1]: Started user@500.service. Dec 13 14:28:55.804579 systemd[1]: Started session-1.scope. Dec 13 14:28:55.805274 systemd[1]: Started session-2.scope. Dec 13 14:29:01.449281 waagent[1635]: 2024-12-13T14:29:01.449161Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 14:29:01.453512 waagent[1635]: 2024-12-13T14:29:01.453436Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 14:29:01.456259 waagent[1635]: 2024-12-13T14:29:01.456195Z INFO Daemon Daemon Python: 3.9.16 Dec 13 14:29:01.459108 waagent[1635]: 2024-12-13T14:29:01.459037Z INFO Daemon Daemon Run daemon Dec 13 14:29:01.461551 waagent[1635]: 2024-12-13T14:29:01.461487Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 14:29:01.486281 waagent[1635]: 2024-12-13T14:29:01.486151Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Dec 13 14:29:01.493181 waagent[1635]: 2024-12-13T14:29:01.493074Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:29:01.515676 waagent[1635]: 2024-12-13T14:29:01.493463Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:29:01.515676 waagent[1635]: 2024-12-13T14:29:01.494050Z INFO Daemon Daemon Using waagent for provisioning Dec 13 14:29:01.515676 waagent[1635]: 2024-12-13T14:29:01.495679Z INFO Daemon Daemon Activate resource disk Dec 13 14:29:01.515676 waagent[1635]: 2024-12-13T14:29:01.496509Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 14:29:01.515676 waagent[1635]: 2024-12-13T14:29:01.504231Z INFO Daemon Daemon Found device: None Dec 13 14:29:01.515676 waagent[1635]: 2024-12-13T14:29:01.505195Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 14:29:01.515676 waagent[1635]: 2024-12-13T14:29:01.506039Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 14:29:01.515676 waagent[1635]: 2024-12-13T14:29:01.507710Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:29:01.515676 waagent[1635]: 2024-12-13T14:29:01.508897Z INFO Daemon Daemon Running default provisioning handler Dec 13 14:29:01.525761 waagent[1635]: 2024-12-13T14:29:01.525624Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:29:01.533664 waagent[1635]: 2024-12-13T14:29:01.533554Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:29:01.538546 waagent[1635]: 2024-12-13T14:29:01.538482Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:29:01.541169 waagent[1635]: 2024-12-13T14:29:01.541109Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 14:29:01.671886 waagent[1635]: 2024-12-13T14:29:01.671680Z INFO Daemon Daemon Successfully mounted dvd Dec 13 14:29:01.753281 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 14:29:01.773526 waagent[1635]: 2024-12-13T14:29:01.773396Z INFO Daemon Daemon Detect protocol endpoint Dec 13 14:29:01.776860 waagent[1635]: 2024-12-13T14:29:01.776786Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:29:01.780533 waagent[1635]: 2024-12-13T14:29:01.780467Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 14:29:01.784496 waagent[1635]: 2024-12-13T14:29:01.784433Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 14:29:01.787362 waagent[1635]: 2024-12-13T14:29:01.787300Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 14:29:01.790222 waagent[1635]: 2024-12-13T14:29:01.790160Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 14:29:01.899749 waagent[1635]: 2024-12-13T14:29:01.899659Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 14:29:01.908637 waagent[1635]: 2024-12-13T14:29:01.900598Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 14:29:01.908637 waagent[1635]: 2024-12-13T14:29:01.901643Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 14:29:02.215109 waagent[1635]: 2024-12-13T14:29:02.214955Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 14:29:02.224599 waagent[1635]: 2024-12-13T14:29:02.224523Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 14:29:02.229903 waagent[1635]: 2024-12-13T14:29:02.224915Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 14:29:02.303503 waagent[1635]: 2024-12-13T14:29:02.303376Z INFO Daemon Daemon Found private key matching thumbprint F5B48523D9751B751F44C59ED806E29576498436 Dec 13 14:29:02.307663 waagent[1635]: 2024-12-13T14:29:02.307555Z INFO Daemon Daemon Certificate with thumbprint 88D7B4B29E821ECBE2BBCAF9E4E430AF3EBCD0FA has no matching private key. Dec 13 14:29:02.314437 waagent[1635]: 2024-12-13T14:29:02.307974Z INFO Daemon Daemon Fetch goal state completed Dec 13 14:29:02.360994 waagent[1635]: 2024-12-13T14:29:02.360914Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: a1f2102b-fc24-4a5d-8881-1a44ed448374 New eTag: 1339717585619165715] Dec 13 14:29:02.369057 waagent[1635]: 2024-12-13T14:29:02.361881Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:29:02.372846 waagent[1635]: 2024-12-13T14:29:02.372790Z INFO Daemon Daemon Starting provisioning Dec 13 14:29:02.381896 waagent[1635]: 2024-12-13T14:29:02.373157Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 14:29:02.381896 waagent[1635]: 2024-12-13T14:29:02.374227Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-e445ccd8ad] Dec 13 14:29:02.391325 waagent[1635]: 2024-12-13T14:29:02.391207Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-e445ccd8ad] Dec 13 14:29:02.395010 waagent[1635]: 2024-12-13T14:29:02.394936Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 14:29:02.398555 waagent[1635]: 2024-12-13T14:29:02.398489Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 14:29:02.412965 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 14:29:02.413269 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 14:29:02.413344 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 14:29:02.413631 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:29:02.418774 systemd-networkd[1228]: eth0: DHCPv6 lease lost Dec 13 14:29:02.420113 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:29:02.420426 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:29:02.423553 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:29:02.460037 systemd-networkd[1701]: enP61560s1: Link UP Dec 13 14:29:02.460048 systemd-networkd[1701]: enP61560s1: Gained carrier Dec 13 14:29:02.461344 systemd-networkd[1701]: eth0: Link UP Dec 13 14:29:02.461354 systemd-networkd[1701]: eth0: Gained carrier Dec 13 14:29:02.461793 systemd-networkd[1701]: lo: Link UP Dec 13 14:29:02.461802 systemd-networkd[1701]: lo: Gained carrier Dec 13 14:29:02.462115 systemd-networkd[1701]: eth0: Gained IPv6LL Dec 13 14:29:02.462383 systemd-networkd[1701]: Enumeration completed Dec 13 14:29:02.462518 systemd[1]: Started systemd-networkd.service. Dec 13 14:29:02.464949 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:29:02.466877 waagent[1635]: 2024-12-13T14:29:02.466665Z INFO Daemon Daemon Create user account if not exists Dec 13 14:29:02.470847 waagent[1635]: 2024-12-13T14:29:02.470764Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 14:29:02.472777 systemd-networkd[1701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:29:02.473962 waagent[1635]: 2024-12-13T14:29:02.473891Z INFO Daemon Daemon Configure sudoer Dec 13 14:29:02.476819 waagent[1635]: 2024-12-13T14:29:02.476760Z INFO Daemon Daemon Configure sshd Dec 13 14:29:02.479199 waagent[1635]: 2024-12-13T14:29:02.479122Z INFO Daemon Daemon Deploy ssh public key. Dec 13 14:29:02.510814 systemd-networkd[1701]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:29:02.513543 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:29:03.587523 waagent[1635]: 2024-12-13T14:29:03.587426Z INFO Daemon Daemon Provisioning complete Dec 13 14:29:03.605094 waagent[1635]: 2024-12-13T14:29:03.605016Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 14:29:03.608476 waagent[1635]: 2024-12-13T14:29:03.608407Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 14:29:03.614163 waagent[1635]: 2024-12-13T14:29:03.614097Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 14:29:03.879344 waagent[1711]: 2024-12-13T14:29:03.879178Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 14:29:03.880092 waagent[1711]: 2024-12-13T14:29:03.880023Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:03.880236 waagent[1711]: 2024-12-13T14:29:03.880184Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:03.891652 waagent[1711]: 2024-12-13T14:29:03.891578Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Dec 13 14:29:03.891826 waagent[1711]: 2024-12-13T14:29:03.891772Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 14:29:03.952450 waagent[1711]: 2024-12-13T14:29:03.952324Z INFO ExtHandler ExtHandler Found private key matching thumbprint F5B48523D9751B751F44C59ED806E29576498436 Dec 13 14:29:03.952670 waagent[1711]: 2024-12-13T14:29:03.952612Z INFO ExtHandler ExtHandler Certificate with thumbprint 88D7B4B29E821ECBE2BBCAF9E4E430AF3EBCD0FA has no matching private key. 
Dec 13 14:29:03.952930 waagent[1711]: 2024-12-13T14:29:03.952877Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 14:29:03.967589 waagent[1711]: 2024-12-13T14:29:03.967527Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: c05cc23b-e081-4aa9-9702-8b0514bdf83e New eTag: 1339717585619165715] Dec 13 14:29:03.968121 waagent[1711]: 2024-12-13T14:29:03.968062Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:29:04.049903 waagent[1711]: 2024-12-13T14:29:04.049752Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:29:04.077460 waagent[1711]: 2024-12-13T14:29:04.077363Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1711 Dec 13 14:29:04.125891 waagent[1711]: 2024-12-13T14:29:04.125790Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:29:04.127175 waagent[1711]: 2024-12-13T14:29:04.127112Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:29:04.244514 waagent[1711]: 2024-12-13T14:29:04.244374Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:29:04.245024 waagent[1711]: 2024-12-13T14:29:04.244943Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:29:04.253486 waagent[1711]: 2024-12-13T14:29:04.253431Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:29:04.253971 waagent[1711]: 2024-12-13T14:29:04.253909Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:29:04.255030 waagent[1711]: 2024-12-13T14:29:04.254966Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 14:29:04.256274 waagent[1711]: 2024-12-13T14:29:04.256215Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:29:04.256669 waagent[1711]: 2024-12-13T14:29:04.256614Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:04.256839 waagent[1711]: 2024-12-13T14:29:04.256792Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:04.257347 waagent[1711]: 2024-12-13T14:29:04.257291Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Dec 13 14:29:04.257622 waagent[1711]: 2024-12-13T14:29:04.257566Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:29:04.257622 waagent[1711]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:29:04.257622 waagent[1711]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:29:04.257622 waagent[1711]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:29:04.257622 waagent[1711]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:04.257622 waagent[1711]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:04.257622 waagent[1711]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:04.260712 waagent[1711]: 2024-12-13T14:29:04.260513Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:29:04.261486 waagent[1711]: 2024-12-13T14:29:04.261432Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:04.261644 waagent[1711]: 2024-12-13T14:29:04.261584Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:29:04.261925 waagent[1711]: 2024-12-13T14:29:04.261873Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:29:04.262435 waagent[1711]: 2024-12-13T14:29:04.262382Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:04.263164 waagent[1711]: 2024-12-13T14:29:04.263108Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:29:04.263316 waagent[1711]: 2024-12-13T14:29:04.263270Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:29:04.263456 waagent[1711]: 2024-12-13T14:29:04.263414Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:29:04.264368 waagent[1711]: 2024-12-13T14:29:04.264306Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:29:04.264711 waagent[1711]: 2024-12-13T14:29:04.264645Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 14:29:04.265328 waagent[1711]: 2024-12-13T14:29:04.265278Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:29:04.276745 waagent[1711]: 2024-12-13T14:29:04.276682Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 14:29:04.277324 waagent[1711]: 2024-12-13T14:29:04.277276Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:29:04.278128 waagent[1711]: 2024-12-13T14:29:04.278069Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Dec 13 14:29:04.301572 waagent[1711]: 2024-12-13T14:29:04.301480Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1701' Dec 13 14:29:04.325603 waagent[1711]: 2024-12-13T14:29:04.325520Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
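The routing table above is the raw contents of /proc/net/route: the destination, gateway, and mask columns are little-endian hex IPv4 values. As an illustration only (not part of any component in this log), a short Python sketch decoding a few of the rows:

```python
import socket
import struct

def decode(hexval: str) -> str:
    """Convert a little-endian hex IPv4 field from /proc/net/route to dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(hexval, 16)))

# Destination / gateway / mask columns copied from the table logged above.
rows = [
    ("00000000", "0108C80A", "00000000"),   # default route
    ("0008C80A", "00000000", "00FFFFFF"),   # on-link subnet
    ("10813FA8", "0108C80A", "FFFFFFFF"),   # Azure wire server host route
    ("FEA9FEA9", "0108C80A", "FFFFFFFF"),   # instance metadata endpoint
]
for dest, gw, mask in rows:
    print(f"{decode(dest)}/{decode(mask)} via {decode(gw)}")
# 0.0.0.0/0.0.0.0 via 10.200.8.1
# 10.200.8.0/255.255.255.0 via 0.0.0.0
# 168.63.129.16/255.255.255.255 via 10.200.8.1
# 169.254.169.254/255.255.255.255 via 10.200.8.1
```

The 168.63.129.16 and 169.254.169.254 host routes match the wire-server and metadata endpoints the agent probes elsewhere in this log. The "invalid literal for int() with base 10: 'MainPID=1701'" error just above also shows its own cause: the agent applied int() to a full key=value string (the `MainPID=1701` form that `systemctl show -p MainPID` prints) instead of the bare number.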
Dec 13 14:29:04.407596 waagent[1711]: 2024-12-13T14:29:04.407507Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:29:04.407596 waagent[1711]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:29:04.407596 waagent[1711]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:29:04.407596 waagent[1711]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:fc:ae brd ff:ff:ff:ff:ff:ff Dec 13 14:29:04.407596 waagent[1711]: 3: enP61560s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:fc:ae brd ff:ff:ff:ff:ff:ff\ altname enP61560p0s2 Dec 13 14:29:04.407596 waagent[1711]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:29:04.407596 waagent[1711]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:29:04.407596 waagent[1711]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:29:04.407596 waagent[1711]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:29:04.407596 waagent[1711]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:29:04.407596 waagent[1711]: 2: eth0 inet6 fe80::7e1e:52ff:fe35:fcae/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:29:04.621940 waagent[1711]: 2024-12-13T14:29:04.621870Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 14:29:05.310877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:29:05.311196 systemd[1]: Stopped kubelet.service. Dec 13 14:29:05.313317 systemd[1]: Starting kubelet.service... Dec 13 14:29:05.399752 systemd[1]: Started kubelet.service. Dec 13 14:29:05.618431 waagent[1635]: 2024-12-13T14:29:05.618177Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 14:29:05.624693 waagent[1635]: 2024-12-13T14:29:05.624630Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 14:29:05.996980 kubelet[1748]: E1213 14:29:05.996864 1748 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:06.001813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:06.002018 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
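In the interface dump above, eth0's link-local address fe80::7e1e:52ff:fe35:fcae is simply the modified EUI-64 form of its MAC address 7c:1e:52:35:fc:ae (flip the universal/local bit, insert ff:fe in the middle). A small illustrative Python check, not part of any component shown in this log:

```python
def eui64_link_local(mac: str) -> str:
    """Derive the IPv6 link-local address from a MAC using modified EUI-64."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02                           # flip the universal/local bit (7c -> 7e)
    eui = b[:3] + [0xFF, 0xFE] + b[3:]     # insert ff:fe between the OUI and NIC halves
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(eui64_link_local("7c:1e:52:35:fc:ae"))  # fe80::7e1e:52ff:fe35:fcae
```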
Dec 13 14:29:06.941554 waagent[1754]: 2024-12-13T14:29:06.941445Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 14:29:06.942320 waagent[1754]: 2024-12-13T14:29:06.942239Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 14:29:06.942468 waagent[1754]: 2024-12-13T14:29:06.942416Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 14:29:06.942617 waagent[1754]: 2024-12-13T14:29:06.942569Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 13 14:29:06.952169 waagent[1754]: 2024-12-13T14:29:06.952076Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:29:06.952547 waagent[1754]: 2024-12-13T14:29:06.952492Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:06.952707 waagent[1754]: 2024-12-13T14:29:06.952659Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:06.964452 waagent[1754]: 2024-12-13T14:29:06.964384Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 14:29:06.973164 waagent[1754]: 2024-12-13T14:29:06.973102Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 14:29:06.974063 waagent[1754]: 2024-12-13T14:29:06.974005Z INFO ExtHandler Dec 13 14:29:06.974211 waagent[1754]: 2024-12-13T14:29:06.974162Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b942cddb-9608-42d6-8c0b-41968ca5a850 eTag: 1339717585619165715 source: Fabric] Dec 13 14:29:06.974929 waagent[1754]: 2024-12-13T14:29:06.974871Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 13 14:29:06.976022 waagent[1754]: 2024-12-13T14:29:06.975964Z INFO ExtHandler Dec 13 14:29:06.976156 waagent[1754]: 2024-12-13T14:29:06.976109Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 14:29:06.982630 waagent[1754]: 2024-12-13T14:29:06.982580Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 14:29:06.983054 waagent[1754]: 2024-12-13T14:29:06.983006Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:29:07.003035 waagent[1754]: 2024-12-13T14:29:07.002977Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Dec 13 14:29:07.066231 waagent[1754]: 2024-12-13T14:29:07.066107Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F5B48523D9751B751F44C59ED806E29576498436', 'hasPrivateKey': True} Dec 13 14:29:07.067183 waagent[1754]: 2024-12-13T14:29:07.067114Z INFO ExtHandler Downloaded certificate {'thumbprint': '88D7B4B29E821ECBE2BBCAF9E4E430AF3EBCD0FA', 'hasPrivateKey': False} Dec 13 14:29:07.068189 waagent[1754]: 2024-12-13T14:29:07.068129Z INFO ExtHandler Fetch goal state completed Dec 13 14:29:07.089314 waagent[1754]: 2024-12-13T14:29:07.089222Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 14:29:07.100569 waagent[1754]: 2024-12-13T14:29:07.100490Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1754 Dec 13 14:29:07.103543 waagent[1754]: 2024-12-13T14:29:07.103482Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:29:07.104482 waagent[1754]: 2024-12-13T14:29:07.104424Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 14:29:07.104782 waagent[1754]: 2024-12-13T14:29:07.104709Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 14:29:07.106697 waagent[1754]: 2024-12-13T14:29:07.106640Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:29:07.111264 waagent[1754]: 2024-12-13T14:29:07.111212Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:29:07.111615 waagent[1754]: 2024-12-13T14:29:07.111560Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:29:07.119476 waagent[1754]: 2024-12-13T14:29:07.119425Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:29:07.119933 waagent[1754]: 2024-12-13T14:29:07.119880Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:29:07.125631 waagent[1754]: 2024-12-13T14:29:07.125544Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 14:29:07.126656 waagent[1754]: 2024-12-13T14:29:07.126592Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 14:29:07.128087 waagent[1754]: 2024-12-13T14:29:07.128029Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:29:07.128613 waagent[1754]: 2024-12-13T14:29:07.128559Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:07.128804 waagent[1754]: 2024-12-13T14:29:07.128750Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:07.129213 waagent[1754]: 2024-12-13T14:29:07.129159Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:29:07.130000 waagent[1754]: 2024-12-13T14:29:07.129944Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
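The 'thumbprint' values in the certificate download entries above follow the usual convention of an SHA-1 digest over the DER-encoded certificate, rendered as uppercase hex. Assuming that convention (the agent's exact implementation is not visible in this log), a minimal Python sketch for computing one from a PEM file:

```python
import hashlib
import ssl

def cert_thumbprint(pem_path: str) -> str:
    """SHA-1 thumbprint of a PEM certificate, e.g. F5B48523D9751B751F44C59ED806E29576498436."""
    with open(pem_path) as f:
        der = ssl.PEM_cert_to_DER_cert(f.read())
    return hashlib.sha1(der).hexdigest().upper()

# Hypothetical path, shown only for illustration; goal-state certificates are
# stored by the agent under /var/lib/waagent/.
# print(cert_thumbprint("/var/lib/waagent/F5B48523D9751B751F44C59ED806E29576498436.crt"))
```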
Dec 13 14:29:07.130497 waagent[1754]: 2024-12-13T14:29:07.130440Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:29:07.130620 waagent[1754]: 2024-12-13T14:29:07.130550Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:07.130845 waagent[1754]: 2024-12-13T14:29:07.130793Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:29:07.131116 waagent[1754]: 2024-12-13T14:29:07.131063Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:29:07.131116 waagent[1754]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:29:07.131116 waagent[1754]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:29:07.131116 waagent[1754]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:29:07.131116 waagent[1754]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:07.131116 waagent[1754]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:07.131116 waagent[1754]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:07.133535 waagent[1754]: 2024-12-13T14:29:07.133443Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:07.134177 waagent[1754]: 2024-12-13T14:29:07.134098Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:29:07.136965 waagent[1754]: 2024-12-13T14:29:07.136787Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:29:07.137156 waagent[1754]: 2024-12-13T14:29:07.137084Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:29:07.140988 waagent[1754]: 2024-12-13T14:29:07.140683Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:29:07.141284 waagent[1754]: 2024-12-13T14:29:07.141207Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:29:07.141659 waagent[1754]: 2024-12-13T14:29:07.141585Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 14:29:07.156249 waagent[1754]: 2024-12-13T14:29:07.156189Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:29:07.156249 waagent[1754]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:29:07.156249 waagent[1754]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:29:07.156249 waagent[1754]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:fc:ae brd ff:ff:ff:ff:ff:ff Dec 13 14:29:07.156249 waagent[1754]: 3: enP61560s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:fc:ae brd ff:ff:ff:ff:ff:ff\ altname enP61560p0s2 Dec 13 14:29:07.156249 waagent[1754]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:29:07.156249 waagent[1754]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:29:07.156249 waagent[1754]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:29:07.156249 waagent[1754]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:29:07.156249 waagent[1754]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:29:07.156249 waagent[1754]: 2: eth0 inet6 fe80::7e1e:52ff:fe35:fcae/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:29:07.167477 waagent[1754]: 2024-12-13T14:29:07.167403Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 14:29:07.200462 waagent[1754]: 2024-12-13T14:29:07.200361Z INFO ExtHandler ExtHandler Dec 13 14:29:07.200830 waagent[1754]: 2024-12-13T14:29:07.200773Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ed7a0c74-6e25-4a00-bd1e-8862149e36b8 correlation a8028d61-b56a-4b34-a42a-6f3226913cb5 created: 2024-12-13T14:27:39.198067Z] Dec 13 14:29:07.201672 waagent[1754]: 2024-12-13T14:29:07.201615Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 14:29:07.203632 waagent[1754]: 2024-12-13T14:29:07.203579Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Dec 13 14:29:07.229897 waagent[1754]: 2024-12-13T14:29:07.229836Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Dec 13 14:29:07.245206 waagent[1754]: 2024-12-13T14:29:07.245151Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 270CCB97-C4B4-4EE1-AF50-141E7E4626D8;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 14:29:07.353869 waagent[1754]: 2024-12-13T14:29:07.353747Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 13 14:29:07.353869 waagent[1754]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:07.353869 waagent[1754]: pkts bytes target prot opt in out source destination Dec 13 14:29:07.353869 waagent[1754]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:07.353869 waagent[1754]: pkts bytes target prot opt in out source destination Dec 13 14:29:07.353869 waagent[1754]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:07.353869 waagent[1754]: pkts bytes target prot opt in out source destination Dec 13 14:29:07.353869 waagent[1754]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:29:07.353869 waagent[1754]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:29:07.353869 waagent[1754]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:29:07.361023 waagent[1754]: 2024-12-13T14:29:07.360924Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 14:29:07.361023 waagent[1754]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:07.361023 waagent[1754]: pkts bytes target prot opt in out source destination Dec 13 14:29:07.361023 waagent[1754]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:07.361023 waagent[1754]: pkts bytes target prot opt in out source destination Dec 13 14:29:07.361023 waagent[1754]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:07.361023 waagent[1754]: pkts bytes target prot opt in out source destination Dec 13 14:29:07.361023 waagent[1754]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:29:07.361023 waagent[1754]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:29:07.361023 waagent[1754]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:29:07.361591 waagent[1754]: 2024-12-13T14:29:07.361538Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 14:29:16.060680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:29:16.060986 systemd[1]: Stopped kubelet.service. Dec 13 14:29:16.062836 systemd[1]: Starting kubelet.service... Dec 13 14:29:16.147022 systemd[1]: Started kubelet.service. Dec 13 14:29:16.721052 kubelet[1814]: E1213 14:29:16.720991 1814 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:16.723037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:16.723241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:26.810601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:29:26.810895 systemd[1]: Stopped kubelet.service. Dec 13 14:29:26.812797 systemd[1]: Starting kubelet.service... Dec 13 14:29:26.896511 systemd[1]: Started kubelet.service. 
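After programming the wire-server firewall rules listed above (DNS on tcp/53 and root-owned traffic to 168.63.129.16 are accepted, other new connections are dropped), the agent logs "Set block dev timeout: sda with timeout: 300", which corresponds to writing the SCSI command timeout into sysfs. A minimal illustrative sketch of that operation (assumed sysfs path for SCSI disks, requires root):

```python
from pathlib import Path

def set_block_device_timeout(device: str, seconds: int) -> None:
    """Write the SCSI command timeout for a block device via sysfs (requires root)."""
    Path(f"/sys/block/{device}/device/timeout").write_text(f"{seconds}\n")

# Mirrors the log entry above: sda gets a 300 second timeout.
# set_block_device_timeout("sda", 300)
```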
Dec 13 14:29:27.444680 kubelet[1830]: E1213 14:29:27.444622 1830 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:27.446563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:27.446786 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:32.907473 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 13 14:29:37.560985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:29:37.561284 systemd[1]: Stopped kubelet.service. Dec 13 14:29:37.563471 systemd[1]: Starting kubelet.service... Dec 13 14:29:37.647342 systemd[1]: Started kubelet.service. Dec 13 14:29:38.204248 kubelet[1844]: E1213 14:29:38.204188 1844 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:38.205924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:38.206110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:39.263673 update_engine[1508]: I1213 14:29:39.263585 1508 update_attempter.cc:509] Updating boot flags... Dec 13 14:29:46.550102 systemd[1]: Created slice system-sshd.slice. Dec 13 14:29:46.551769 systemd[1]: Started sshd@0-10.200.8.17:22-10.200.16.10:52114.service. Dec 13 14:29:47.427317 sshd[1921]: Accepted publickey for core from 10.200.16.10 port 52114 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:47.428893 sshd[1921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:47.433652 systemd[1]: Started session-3.scope. Dec 13 14:29:47.433910 systemd-logind[1505]: New session 3 of user core. Dec 13 14:29:48.052907 systemd[1]: Started sshd@1-10.200.8.17:22-10.200.16.10:52126.service. Dec 13 14:29:48.310940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 14:29:48.311222 systemd[1]: Stopped kubelet.service. Dec 13 14:29:48.313920 systemd[1]: Starting kubelet.service... Dec 13 14:29:48.399202 systemd[1]: Started kubelet.service. Dec 13 14:29:48.444024 kubelet[1935]: E1213 14:29:48.443982 1935 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:48.445552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:48.445740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:48.758637 sshd[1926]: Accepted publickey for core from 10.200.16.10 port 52126 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:48.760605 sshd[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:48.765761 systemd[1]: Started session-4.scope. Dec 13 14:29:48.766017 systemd-logind[1505]: New session 4 of user core. 
Dec 13 14:29:49.263634 sshd[1926]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:49.266487 systemd[1]: sshd@1-10.200.8.17:22-10.200.16.10:52126.service: Deactivated successfully. Dec 13 14:29:49.267990 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:29:49.268031 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:29:49.269637 systemd-logind[1505]: Removed session 4. Dec 13 14:29:49.381871 systemd[1]: Started sshd@2-10.200.8.17:22-10.200.16.10:46448.service. Dec 13 14:29:50.087961 sshd[1949]: Accepted publickey for core from 10.200.16.10 port 46448 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:50.089644 sshd[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:50.094500 systemd[1]: Started session-5.scope. Dec 13 14:29:50.094803 systemd-logind[1505]: New session 5 of user core. Dec 13 14:29:50.584793 sshd[1949]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:50.588061 systemd[1]: sshd@2-10.200.8.17:22-10.200.16.10:46448.service: Deactivated successfully. Dec 13 14:29:50.589260 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:29:50.590583 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:29:50.591537 systemd-logind[1505]: Removed session 5. Dec 13 14:29:50.702077 systemd[1]: Started sshd@3-10.200.8.17:22-10.200.16.10:46462.service. Dec 13 14:29:51.409287 sshd[1956]: Accepted publickey for core from 10.200.16.10 port 46462 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:51.410982 sshd[1956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:51.416565 systemd[1]: Started session-6.scope. Dec 13 14:29:51.416823 systemd-logind[1505]: New session 6 of user core. Dec 13 14:29:51.910520 sshd[1956]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:51.913658 systemd[1]: sshd@3-10.200.8.17:22-10.200.16.10:46462.service: Deactivated successfully. Dec 13 14:29:51.914990 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:29:51.915078 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:29:51.916529 systemd-logind[1505]: Removed session 6. Dec 13 14:29:52.046401 systemd[1]: Started sshd@4-10.200.8.17:22-10.200.16.10:46478.service. Dec 13 14:29:52.753858 sshd[1963]: Accepted publickey for core from 10.200.16.10 port 46478 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:52.755522 sshd[1963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:52.761348 systemd[1]: Started session-7.scope. Dec 13 14:29:52.761597 systemd-logind[1505]: New session 7 of user core. Dec 13 14:29:53.460888 sudo[1967]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 14:29:53.461199 sudo[1967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:29:53.480955 dbus-daemon[1492]: н\x9c\xeaeU: received setenforce notice (enforcing=398969920) Dec 13 14:29:53.483125 sudo[1967]: pam_unix(sudo:session): session closed for user root Dec 13 14:29:53.619110 sshd[1963]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:53.622812 systemd[1]: sshd@4-10.200.8.17:22-10.200.16.10:46478.service: Deactivated successfully. Dec 13 14:29:53.624356 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:29:53.624935 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit. 
Dec 13 14:29:53.626164 systemd-logind[1505]: Removed session 7. Dec 13 14:29:53.735902 systemd[1]: Started sshd@5-10.200.8.17:22-10.200.16.10:46480.service. Dec 13 14:29:54.442058 sshd[1971]: Accepted publickey for core from 10.200.16.10 port 46480 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:54.443840 sshd[1971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:54.448807 systemd-logind[1505]: New session 8 of user core. Dec 13 14:29:54.449006 systemd[1]: Started session-8.scope. Dec 13 14:29:54.828260 sudo[1976]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 14:29:54.828558 sudo[1976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:29:54.831378 sudo[1976]: pam_unix(sudo:session): session closed for user root Dec 13 14:29:54.835926 sudo[1975]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 14:29:54.836204 sudo[1975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:29:54.844893 systemd[1]: Stopping audit-rules.service... Dec 13 14:29:54.844000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:29:54.847000 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 14:29:54.848926 auditctl[1979]: No rules Dec 13 14:29:54.847206 systemd[1]: Stopped audit-rules.service. Dec 13 14:29:54.849691 kernel: kauditd_printk_skb: 80 callbacks suppressed Dec 13 14:29:54.849761 kernel: audit: type=1305 audit(1734100194.844:162): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:29:54.849256 systemd[1]: Starting audit-rules.service... Dec 13 14:29:54.844000 audit[1979]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc7fd84170 a2=420 a3=0 items=0 ppid=1 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:54.871738 kernel: audit: type=1300 audit(1734100194.844:162): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc7fd84170 a2=420 a3=0 items=0 ppid=1 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:54.844000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:29:54.876172 kernel: audit: type=1327 audit(1734100194.844:162): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:29:54.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:54.880240 augenrules[1997]: No rules Dec 13 14:29:54.880991 systemd[1]: Finished audit-rules.service. Dec 13 14:29:54.882922 sudo[1975]: pam_unix(sudo:session): session closed for user root Dec 13 14:29:54.886800 kernel: audit: type=1131 audit(1734100194.844:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:54.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:54.897025 kernel: audit: type=1130 audit(1734100194.875:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:54.897743 kernel: audit: type=1106 audit(1734100194.880:165): pid=1975 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:29:54.880000 audit[1975]: USER_END pid=1975 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:29:54.880000 audit[1975]: CRED_DISP pid=1975 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:29:54.909735 kernel: audit: type=1104 audit(1734100194.880:166): pid=1975 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:29:55.003057 sshd[1971]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:55.002000 audit[1971]: USER_END pid=1971 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:29:55.010828 systemd[1]: sshd@5-10.200.8.17:22-10.200.16.10:46480.service: Deactivated successfully. Dec 13 14:29:55.011507 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:29:55.012787 systemd-logind[1505]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:29:55.013635 systemd-logind[1505]: Removed session 8. 
Dec 13 14:29:55.002000 audit[1971]: CRED_DISP pid=1971 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:29:55.032825 kernel: audit: type=1106 audit(1734100195.002:167): pid=1971 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:29:55.032888 kernel: audit: type=1104 audit(1734100195.002:168): pid=1971 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:29:55.032918 kernel: audit: type=1131 audit(1734100195.008:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.17:22-10.200.16.10:46480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:55.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.17:22-10.200.16.10:46480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:55.120453 systemd[1]: Started sshd@6-10.200.8.17:22-10.200.16.10:46494.service. Dec 13 14:29:55.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.17:22-10.200.16.10:46494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:55.825000 audit[2004]: USER_ACCT pid=2004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:29:55.827518 sshd[2004]: Accepted publickey for core from 10.200.16.10 port 46494 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:55.827000 audit[2004]: CRED_ACQ pid=2004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:29:55.827000 audit[2004]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb1f097a0 a2=3 a3=0 items=0 ppid=1 pid=2004 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:55.827000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:29:55.829277 sshd[2004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:55.834617 systemd[1]: Started session-9.scope. Dec 13 14:29:55.835022 systemd-logind[1505]: New session 9 of user core. 
Dec 13 14:29:55.838000 audit[2004]: USER_START pid=2004 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:29:55.840000 audit[2007]: CRED_ACQ pid=2007 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:29:56.212000 audit[2008]: USER_ACCT pid=2008 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:29:56.212000 audit[2008]: CRED_REFR pid=2008 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:29:56.213244 sudo[2008]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:29:56.213564 sudo[2008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:29:56.214000 audit[2008]: USER_START pid=2008 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:29:56.250514 systemd[1]: Starting docker.service... Dec 13 14:29:56.286665 env[2018]: time="2024-12-13T14:29:56.286614386Z" level=info msg="Starting up" Dec 13 14:29:56.287971 env[2018]: time="2024-12-13T14:29:56.287942606Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:29:56.287971 env[2018]: time="2024-12-13T14:29:56.287961807Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:29:56.288180 env[2018]: time="2024-12-13T14:29:56.287988207Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Dec 13 14:29:56.288180 env[2018]: time="2024-12-13T14:29:56.288001207Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:29:56.290097 env[2018]: time="2024-12-13T14:29:56.289710134Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:29:56.290097 env[2018]: time="2024-12-13T14:29:56.290084040Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:29:56.290240 env[2018]: time="2024-12-13T14:29:56.290104740Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Dec 13 14:29:56.290240 env[2018]: time="2024-12-13T14:29:56.290116440Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:29:56.297526 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3468149991-merged.mount: Deactivated successfully. 
Dec 13 14:29:56.357253 env[2018]: time="2024-12-13T14:29:56.357213491Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:29:56.357253 env[2018]: time="2024-12-13T14:29:56.357237892Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:29:56.357507 env[2018]: time="2024-12-13T14:29:56.357469295Z" level=info msg="Loading containers: start." Dec 13 14:29:56.408000 audit[2046]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.408000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffed1928a60 a2=0 a3=7ffed1928a4c items=0 ppid=2018 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.408000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 14:29:56.410000 audit[2048]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.410000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff80197f70 a2=0 a3=7fff80197f5c items=0 ppid=2018 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.410000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 14:29:56.412000 audit[2050]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.412000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe1888ba30 a2=0 a3=7ffe1888ba1c items=0 ppid=2018 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.412000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:29:56.414000 audit[2052]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.414000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe4bf37e40 a2=0 a3=7ffe4bf37e2c items=0 ppid=2018 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.414000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:29:56.416000 audit[2054]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.416000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc4108680 a2=0 a3=7ffcc410866c items=0 ppid=2018 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.416000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 14:29:56.418000 audit[2056]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.418000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe154be570 a2=0 a3=7ffe154be55c items=0 ppid=2018 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.418000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 14:29:56.473000 audit[2058]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.473000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5f726130 a2=0 a3=7fff5f72611c items=0 ppid=2018 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.473000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 14:29:56.475000 audit[2060]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.475000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcf0b79ac0 a2=0 a3=7ffcf0b79aac items=0 ppid=2018 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.475000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 14:29:56.477000 audit[2062]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.477000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc8ecbc800 a2=0 a3=7ffc8ecbc7ec items=0 ppid=2018 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.477000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:29:56.493000 audit[2066]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.493000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd37fee7c0 a2=0 a3=7ffd37fee7ac items=0 ppid=2018 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.493000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:29:56.498000 audit[2067]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.498000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe6dba6620 a2=0 a3=7ffe6dba660c items=0 ppid=2018 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.498000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:29:56.525746 kernel: Initializing XFRM netlink socket Dec 13 14:29:56.548306 env[2018]: time="2024-12-13T14:29:56.548264883Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:29:56.616000 audit[2075]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.616000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc6e39a180 a2=0 a3=7ffc6e39a16c items=0 ppid=2018 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.616000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 14:29:56.637000 audit[2078]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.637000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdc6cbd7f0 a2=0 a3=7ffdc6cbd7dc items=0 ppid=2018 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.637000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 14:29:56.640000 audit[2081]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.640000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcc3b2f1b0 a2=0 a3=7ffcc3b2f19c items=0 ppid=2018 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.640000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 14:29:56.642000 audit[2083]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.642000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc2b757180 a2=0 a3=7ffc2b75716c items=0 ppid=2018 pid=2083 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.642000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 14:29:56.644000 audit[2085]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.644000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe8dac4da0 a2=0 a3=7ffe8dac4d8c items=0 ppid=2018 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.644000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 14:29:56.646000 audit[2087]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.646000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe939ccdb0 a2=0 a3=7ffe939ccd9c items=0 ppid=2018 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.646000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 14:29:56.648000 audit[2089]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.648000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe80af61a0 a2=0 a3=7ffe80af618c items=0 ppid=2018 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.648000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 14:29:56.650000 audit[2091]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.650000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd4382fca0 a2=0 a3=7ffd4382fc8c items=0 ppid=2018 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.650000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 14:29:56.652000 audit[2093]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.652000 audit[2093]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdc5a1eb80 a2=0 a3=7ffdc5a1eb6c items=0 ppid=2018 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.652000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:29:56.654000 audit[2095]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.654000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe80791230 a2=0 a3=7ffe8079121c items=0 ppid=2018 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.654000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:29:56.656000 audit[2097]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.656000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc71b50e60 a2=0 a3=7ffc71b50e4c items=0 ppid=2018 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.656000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 14:29:56.657545 systemd-networkd[1701]: docker0: Link UP Dec 13 14:29:56.684000 audit[2101]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.684000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc89019d70 a2=0 a3=7ffc89019d5c items=0 ppid=2018 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.684000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:29:56.688000 audit[2102]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:29:56.688000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff8fe07310 a2=0 a3=7fff8fe072fc items=0 ppid=2018 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:56.688000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:29:56.689994 env[2018]: time="2024-12-13T14:29:56.689964402Z" level=info msg="Loading containers: done." 
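The NETFILTER_CFG/SYSCALL records above log each rule Docker installs, but the command line itself is hidden in the hex-encoded, NUL-separated PROCTITLE payload. A minimal Python sketch for decoding one of those payloads (the sample value is copied from one of the records above, and the decoding simply assumes the standard audit encoding of the raw argv bytes):

```python
import binascii

def decode_proctitle(hex_payload: str) -> str:
    """Decode an audit PROCTITLE payload: hex-encoded bytes with NUL-separated argv."""
    raw = binascii.unhexlify(hex_payload)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# proctitle value copied from one of the NETFILTER_CFG records above
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D41"
    "00444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E"
))
# -> /usr/sbin/iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN
```

Decoding the remaining payloads the same way reveals the usual docker0 rule set: the DOCKER, DOCKER-USER and DOCKER-ISOLATION-STAGE-1/2 chains, the 172.17.0.0/16 MASQUERADE rule in nat POSTROUTING, and the FORWARD hooks into those chains.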
Dec 13 14:29:56.704236 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck257571483-merged.mount: Deactivated successfully. Dec 13 14:29:56.731392 env[2018]: time="2024-12-13T14:29:56.731297450Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:29:56.731770 env[2018]: time="2024-12-13T14:29:56.731747257Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:29:56.731975 env[2018]: time="2024-12-13T14:29:56.731959260Z" level=info msg="Daemon has completed initialization" Dec 13 14:29:56.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:56.761887 systemd[1]: Started docker.service. Dec 13 14:29:56.764392 env[2018]: time="2024-12-13T14:29:56.764352467Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:29:58.560710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 14:29:58.561034 systemd[1]: Stopped kubelet.service. Dec 13 14:29:58.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:58.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:58.563017 systemd[1]: Starting kubelet.service... Dec 13 14:29:58.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:58.716707 systemd[1]: Started kubelet.service. Dec 13 14:29:59.267352 kubelet[2142]: E1213 14:29:59.267293 2142 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:59.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:29:59.269013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:59.269217 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:30:01.923825 env[1521]: time="2024-12-13T14:30:01.923774157Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:30:02.715666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581490211.mount: Deactivated successfully. 
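Mount units such as var-lib-docker-overlay2-opaque\x2dbug\x2dcheck257571483-merged.mount and var-lib-containerd-tmpmounts-containerd\x2dmount1581490211.mount use systemd's path escaping: "-" separates path components and a literal dash becomes \x2d. A small sketch that reverses the escaping to recover the mount point; it ignores the rarer escape cases systemd also defines:

```python
import re

def systemd_unescape(unit_name: str) -> str:
    """Recover the filesystem path behind an escaped systemd mount unit name."""
    stem = unit_name.rsplit(".", 1)[0]  # drop the ".mount" suffix

    def unhex(part: str) -> str:
        # turn "\x2d"-style escapes back into literal characters
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), part)

    return "/" + "/".join(unhex(p) for p in stem.split("-"))

print(systemd_unescape(r"var-lib-containerd-tmpmounts-containerd\x2dmount1581490211.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount1581490211
```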
Dec 13 14:30:04.715698 env[1521]: time="2024-12-13T14:30:04.715640515Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:04.722126 env[1521]: time="2024-12-13T14:30:04.722085797Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:04.726331 env[1521]: time="2024-12-13T14:30:04.726298250Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:04.730059 env[1521]: time="2024-12-13T14:30:04.730029497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:04.730682 env[1521]: time="2024-12-13T14:30:04.730645804Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:30:04.740701 env[1521]: time="2024-12-13T14:30:04.740669830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:30:06.846142 env[1521]: time="2024-12-13T14:30:06.846085649Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:06.850617 env[1521]: time="2024-12-13T14:30:06.850579503Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:06.854960 env[1521]: time="2024-12-13T14:30:06.854929855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:06.858451 env[1521]: time="2024-12-13T14:30:06.858421697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:06.859109 env[1521]: time="2024-12-13T14:30:06.859075404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:30:06.870105 env[1521]: time="2024-12-13T14:30:06.870061835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:30:08.099981 env[1521]: time="2024-12-13T14:30:08.099922737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:08.107258 env[1521]: time="2024-12-13T14:30:08.107214120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:08.110752 env[1521]: 
time="2024-12-13T14:30:08.110705559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:08.113491 env[1521]: time="2024-12-13T14:30:08.113455591Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:08.114169 env[1521]: time="2024-12-13T14:30:08.114136298Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:30:08.124706 env[1521]: time="2024-12-13T14:30:08.124677318Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:30:09.310752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 14:30:09.311009 systemd[1]: Stopped kubelet.service. Dec 13 14:30:09.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:09.315411 kernel: kauditd_printk_skb: 88 callbacks suppressed Dec 13 14:30:09.315490 kernel: audit: type=1130 audit(1734100209.310:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:09.325691 systemd[1]: Starting kubelet.service... Dec 13 14:30:09.339733 kernel: audit: type=1131 audit(1734100209.310:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:09.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:09.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:09.468820 systemd[1]: Started kubelet.service. Dec 13 14:30:09.482798 kernel: audit: type=1130 audit(1734100209.468:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:09.908895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779770503.mount: Deactivated successfully. Dec 13 14:30:09.944118 kubelet[2179]: E1213 14:30:09.944059 2179 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:30:09.945894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:30:09.946092 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
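Both kubelet starts above exit with status 1 for the same reason: /var/lib/kubelet/config.yaml does not exist yet (it is normally written by kubeadm during init/join), so systemd keeps scheduling restarts and the counter climbs. A hedged diagnostic sketch, assuming only the path quoted in the error message, that waits for the file to appear instead of watching the crash loop by hand:

```python
import os
import sys
import time

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the error above

def wait_for_kubelet_config(timeout_s: float = 300.0, poll_s: float = 5.0) -> bool:
    """Poll until the kubelet config exists (e.g. after 'kubeadm init' has run), or give up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.isfile(KUBELET_CONFIG):
            return True
        time.sleep(poll_s)
    return False

if __name__ == "__main__":
    sys.exit(0 if wait_for_kubelet_config() else 1)
```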
Dec 13 14:30:09.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:30:09.958746 kernel: audit: type=1131 audit(1734100209.945:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:30:10.561732 env[1521]: time="2024-12-13T14:30:10.561678484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:10.624380 env[1521]: time="2024-12-13T14:30:10.624309557Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:10.629323 env[1521]: time="2024-12-13T14:30:10.629272910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:10.632782 env[1521]: time="2024-12-13T14:30:10.632747848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:10.633198 env[1521]: time="2024-12-13T14:30:10.633167352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:30:10.642740 env[1521]: time="2024-12-13T14:30:10.642705054Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:30:11.201084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769239185.mount: Deactivated successfully. 
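The kernel audit lines carry their own timestamp and serial, e.g. audit(1734100209.945:211); the first field is a Unix epoch that lines up with the journald prefix on the corresponding SERVICE_STOP record (Dec 13 14:30:09.945000). A minimal conversion sketch:

```python
from datetime import datetime, timezone

def audit_stamp(stamp: str) -> datetime:
    """Convert the 'epoch:serial' stamp inside audit(...) to a UTC datetime."""
    epoch = float(stamp.split(":")[0])
    return datetime.fromtimestamp(epoch, tz=timezone.utc)

print(audit_stamp("1734100209.945:211"))
# -> 2024-12-13 14:30:09.945000+00:00, i.e. Dec 13 14:30:09.945 UTC
```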
Dec 13 14:30:12.472518 env[1521]: time="2024-12-13T14:30:12.472452975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:12.481624 env[1521]: time="2024-12-13T14:30:12.481580068Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:12.514859 env[1521]: time="2024-12-13T14:30:12.514787507Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:12.520851 env[1521]: time="2024-12-13T14:30:12.520799368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:12.521750 env[1521]: time="2024-12-13T14:30:12.521698977Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:30:12.531861 env[1521]: time="2024-12-13T14:30:12.531822581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:30:13.200331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233845987.mount: Deactivated successfully. Dec 13 14:30:13.216624 env[1521]: time="2024-12-13T14:30:13.216577511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:13.222138 env[1521]: time="2024-12-13T14:30:13.222097766Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:13.225577 env[1521]: time="2024-12-13T14:30:13.225543700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:13.229877 env[1521]: time="2024-12-13T14:30:13.229847143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:13.230347 env[1521]: time="2024-12-13T14:30:13.230312747Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:30:13.239756 env[1521]: time="2024-12-13T14:30:13.239727841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:30:13.772083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223370842.mount: Deactivated successfully. 
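Each pull above finishes with a "PullImage ... returns image reference" message pairing the requested tag with the sha256 image ID containerd reports locally (distinct from the repo@sha256 digest in the ImageCreate events). A small sketch, assuming the journal has been captured as plain text, that collects those pairs:

```python
import re

# tolerate both escaped (\" inside msg="...") and unescaped quoting
PULL_RE = re.compile(
    r'PullImage \\?"(?P<ref>[^"\\]+)\\?" returns image reference '
    r'\\?"(?P<image_id>sha256:[0-9a-f]+)\\?"'
)

def pulled_images(journal_text: str) -> dict:
    """Map pulled image references to the image IDs reported by containerd."""
    return {m["ref"]: m["image_id"] for m in PULL_RE.finditer(journal_text)}

# e.g. {'registry.k8s.io/pause:3.9': 'sha256:e6f1816883972d4be47bd...', ...}
```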
Dec 13 14:30:16.371275 env[1521]: time="2024-12-13T14:30:16.371225660Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:16.377897 env[1521]: time="2024-12-13T14:30:16.377849721Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:16.383476 env[1521]: time="2024-12-13T14:30:16.383443572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:16.387541 env[1521]: time="2024-12-13T14:30:16.387513110Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:16.388224 env[1521]: time="2024-12-13T14:30:16.388191416Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:30:19.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:19.380539 systemd[1]: Stopped kubelet.service. Dec 13 14:30:19.386078 systemd[1]: Starting kubelet.service... Dec 13 14:30:19.396130 kernel: audit: type=1130 audit(1734100219.380:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:19.396202 kernel: audit: type=1131 audit(1734100219.383:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:19.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:19.408377 systemd[1]: Reloading. Dec 13 14:30:19.557226 /usr/lib/systemd/system-generators/torcx-generator[2300]: time="2024-12-13T14:30:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:30:19.557694 /usr/lib/systemd/system-generators/torcx-generator[2300]: time="2024-12-13T14:30:19Z" level=info msg="torcx already run" Dec 13 14:30:19.638605 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:30:19.638624 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:30:19.656986 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
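While reloading, systemd also flags two housekeeping items: locksmithd.service still uses the deprecated CPUShares= and MemoryLimit= directives (now CPUWeight= and MemoryMax=), and docker.socket references the legacy /var/run/docker.sock path. A hedged helper sketch for finding such directives in unit files; the replacement map comes from the warnings above, while the directories searched are an assumption:

```python
from pathlib import Path

# Replacements named in the warnings above; the directories searched are an assumption.
DEPRECATED = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

def find_deprecated(unit_dirs=("/etc/systemd/system", "/usr/lib/systemd/system")):
    """Yield (unit file, line number, suggestion) for deprecated resource directives."""
    for unit_dir in map(Path, unit_dirs):
        if not unit_dir.is_dir():
            continue
        for unit in unit_dir.glob("*.service"):
            for lineno, line in enumerate(unit.read_text().splitlines(), start=1):
                for old, new in DEPRECATED.items():
                    if line.lstrip().startswith(old):
                        yield unit, lineno, f"replace {old} with {new}"

for hit in find_deprecated():
    print(*hit)
```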
Dec 13 14:30:19.748148 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:30:19.748432 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 14:30:19.748918 systemd[1]: Stopped kubelet.service. Dec 13 14:30:19.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:30:19.751239 systemd[1]: Starting kubelet.service... Dec 13 14:30:19.763875 kernel: audit: type=1130 audit(1734100219.748:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:30:20.033646 systemd[1]: Started kubelet.service. Dec 13 14:30:20.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.050741 kernel: audit: type=1130 audit(1734100220.033:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:20.092090 kubelet[2367]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:30:20.092090 kubelet[2367]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:30:20.092090 kubelet[2367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:30:20.092603 kubelet[2367]: I1213 14:30:20.092179 2367 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:30:20.598702 kubelet[2367]: I1213 14:30:20.598663 2367 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:30:20.598702 kubelet[2367]: I1213 14:30:20.598692 2367 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:30:20.599013 kubelet[2367]: I1213 14:30:20.598991 2367 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:30:20.783275 kubelet[2367]: I1213 14:30:20.783233 2367 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:30:20.791603 kubelet[2367]: E1213 14:30:20.791576 2367 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:20.809134 kubelet[2367]: I1213 14:30:20.809102 2367 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:30:20.809531 kubelet[2367]: I1213 14:30:20.809507 2367 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:30:20.809712 kubelet[2367]: I1213 14:30:20.809692 2367 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:30:20.809875 kubelet[2367]: I1213 14:30:20.809730 2367 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:30:20.809875 kubelet[2367]: I1213 14:30:20.809746 2367 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:30:20.809875 kubelet[2367]: I1213 14:30:20.809867 2367 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:20.810036 kubelet[2367]: I1213 14:30:20.809974 2367 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:30:20.810036 kubelet[2367]: I1213 14:30:20.809992 2367 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:30:20.810036 kubelet[2367]: I1213 14:30:20.810021 2367 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:30:20.810143 kubelet[2367]: I1213 14:30:20.810053 2367 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:30:20.811466 kubelet[2367]: W1213 14:30:20.811280 2367 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:20.811466 kubelet[2367]: E1213 14:30:20.811349 2367 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:20.811466 kubelet[2367]: W1213 14:30:20.811440 2367 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-e445ccd8ad&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 
14:30:20.811653 kubelet[2367]: E1213 14:30:20.811498 2367 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-e445ccd8ad&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:20.811653 kubelet[2367]: I1213 14:30:20.811573 2367 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:30:20.821011 kubelet[2367]: I1213 14:30:20.820984 2367 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:30:20.821087 kubelet[2367]: W1213 14:30:20.821051 2367 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:30:20.821890 kubelet[2367]: I1213 14:30:20.821869 2367 server.go:1256] "Started kubelet" Dec 13 14:30:20.822261 kubelet[2367]: I1213 14:30:20.822227 2367 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:30:20.823220 kubelet[2367]: I1213 14:30:20.823196 2367 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:30:20.825407 kubelet[2367]: I1213 14:30:20.825389 2367 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:30:20.824000 audit[2367]: AVC avc: denied { mac_admin } for pid=2367 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:20.825885 kubelet[2367]: I1213 14:30:20.825873 2367 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:30:20.826811 kubelet[2367]: I1213 14:30:20.826795 2367 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:30:20.826937 kubelet[2367]: I1213 14:30:20.826924 2367 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:30:20.827083 kubelet[2367]: I1213 14:30:20.827063 2367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:30:20.833972 kubelet[2367]: I1213 14:30:20.833955 2367 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:30:20.824000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:20.839000 kubelet[2367]: E1213 14:30:20.838984 2367 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-e445ccd8ad.1810c2f693033550 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-e445ccd8ad,UID:ci-3510.3.6-a-e445ccd8ad,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-e445ccd8ad,},FirstTimestamp:2024-12-13 14:30:20.821845328 +0000 UTC m=+0.781883727,LastTimestamp:2024-12-13 14:30:20.821845328 +0000 UTC 
m=+0.781883727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-e445ccd8ad,}" Dec 13 14:30:20.844420 kernel: audit: type=1400 audit(1734100220.824:216): avc: denied { mac_admin } for pid=2367 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:20.844494 kernel: audit: type=1401 audit(1734100220.824:216): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:20.844523 kernel: audit: type=1300 audit(1734100220.824:216): arch=c000003e syscall=188 success=no exit=-22 a0=c000ab27e0 a1=c000981740 a2=c000ab27b0 a3=25 items=0 ppid=1 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.824000 audit[2367]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ab27e0 a1=c000981740 a2=c000ab27b0 a3=25 items=0 ppid=1 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.847226 kubelet[2367]: E1213 14:30:20.847214 2367 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:30:20.850540 kubelet[2367]: I1213 14:30:20.849489 2367 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:30:20.851302 kubelet[2367]: I1213 14:30:20.851287 2367 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:30:20.851811 kubelet[2367]: E1213 14:30:20.851798 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-e445ccd8ad?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="200ms" Dec 13 14:30:20.853276 kubelet[2367]: I1213 14:30:20.853264 2367 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:30:20.853360 kubelet[2367]: I1213 14:30:20.853352 2367 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:30:20.853464 kubelet[2367]: I1213 14:30:20.853452 2367 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:30:20.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:30:20.881293 kernel: audit: type=1327 audit(1734100220.824:216): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:30:20.825000 audit[2367]: AVC avc: denied { mac_admin } for pid=2367 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:20.894974 
kubelet[2367]: W1213 14:30:20.894919 2367 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:20.895135 kubelet[2367]: E1213 14:30:20.895121 2367 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:20.896732 kernel: audit: type=1400 audit(1734100220.825:217): avc: denied { mac_admin } for pid=2367 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:20.825000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:20.825000 audit[2367]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b120c0 a1=c000981758 a2=c000ab2870 a3=25 items=0 ppid=1 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:30:20.825000 audit[2377]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:20.825000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc34aa5670 a2=0 a3=7ffc34aa565c items=0 ppid=2367 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:30:20.825000 audit[2378]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:20.825000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2d77e320 a2=0 a3=7ffc2d77e30c items=0 ppid=2367 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:30:20.868000 audit[2380]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:20.868000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc52bc5a40 a2=0 a3=7ffc52bc5a2c items=0 ppid=2367 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.868000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:30:20.881000 audit[2382]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:20.881000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeae3deb10 a2=0 a3=7ffeae3deafc items=0 ppid=2367 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:30:20.905750 kernel: audit: type=1401 audit(1734100220.825:217): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:20.925274 kubelet[2367]: I1213 14:30:20.925246 2367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:30:20.924000 audit[2388]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:20.924000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdd5948890 a2=0 a3=7ffdd594887c items=0 ppid=2367 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.924000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 14:30:20.926000 audit[2391]: NETFILTER_CFG table=mangle:34 family=2 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:20.926000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc026d5240 a2=0 a3=7ffc026d522c items=0 ppid=2367 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:30:20.927000 audit[2390]: NETFILTER_CFG table=mangle:35 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:20.927000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc27814c80 a2=0 a3=7ffc27814c6c items=0 ppid=2367 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.927000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:30:20.928581 kubelet[2367]: I1213 14:30:20.928566 2367 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:30:20.928705 kubelet[2367]: I1213 14:30:20.928695 2367 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:30:20.928855 kubelet[2367]: I1213 14:30:20.928842 2367 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:30:20.929020 kubelet[2367]: E1213 14:30:20.929009 2367 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:30:20.928000 audit[2394]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:20.928000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0fda0b70 a2=0 a3=7ffe0fda0b5c items=0 ppid=2367 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.928000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:30:20.929000 audit[2395]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:20.929000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe00610cf0 a2=0 a3=7ffe00610cdc items=0 ppid=2367 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.929000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:30:20.931241 kubelet[2367]: W1213 14:30:20.931196 2367 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:20.930000 audit[2396]: NETFILTER_CFG table=nat:38 family=10 entries=2 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:20.930000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffebe13bf40 a2=0 a3=7ffebe13bf2c items=0 ppid=2367 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:30:20.931790 kubelet[2367]: E1213 14:30:20.931726 2367 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:20.931000 audit[2397]: NETFILTER_CFG table=filter:39 family=10 entries=2 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:20.931000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd0eb86c40 a2=0 a3=7ffd0eb86c2c items=0 ppid=2367 pid=2397 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:30:20.933955 kubelet[2367]: I1213 14:30:20.933940 2367 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:30:20.933000 audit[2398]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:20.933000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc984d3880 a2=0 a3=7ffc984d386c items=0 ppid=2367 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:30:20.934746 kubelet[2367]: I1213 14:30:20.934531 2367 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:30:20.934746 kubelet[2367]: I1213 14:30:20.934554 2367 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:20.935559 kubelet[2367]: I1213 14:30:20.935540 2367 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:20.935879 kubelet[2367]: E1213 14:30:20.935861 2367 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:20.938522 kubelet[2367]: I1213 14:30:20.938506 2367 policy_none.go:49] "None policy: Start" Dec 13 14:30:20.939046 kubelet[2367]: I1213 14:30:20.939028 2367 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:30:20.939127 kubelet[2367]: I1213 14:30:20.939053 2367 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:30:20.946737 kubelet[2367]: I1213 14:30:20.946704 2367 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:30:20.946000 audit[2367]: AVC avc: denied { mac_admin } for pid=2367 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:20.946000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:20.946000 audit[2367]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d54de0 a1=c000de2030 a2=c000d54db0 a3=25 items=0 ppid=1 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:20.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:30:20.947357 kubelet[2367]: I1213 14:30:20.947344 2367 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:30:20.948105 kubelet[2367]: I1213 14:30:20.948087 2367 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:30:20.949555 kubelet[2367]: E1213 14:30:20.949536 2367 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-e445ccd8ad\" not found" Dec 13 14:30:21.030210 kubelet[2367]: I1213 14:30:21.030158 2367 topology_manager.go:215] "Topology Admit Handler" podUID="47bf02b416ed2351bf10c0b807bd8cb9" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.031960 kubelet[2367]: I1213 14:30:21.031929 2367 topology_manager.go:215] "Topology Admit Handler" podUID="386db16333afdbbf74c80fa7d0356d8a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.033637 kubelet[2367]: I1213 14:30:21.033613 2367 topology_manager.go:215] "Topology Admit Handler" podUID="cbe5c41c78ba881c803767dc44a5f460" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.052689 kubelet[2367]: I1213 14:30:21.052669 2367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47bf02b416ed2351bf10c0b807bd8cb9-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-e445ccd8ad\" (UID: \"47bf02b416ed2351bf10c0b807bd8cb9\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.052888 kubelet[2367]: I1213 14:30:21.052873 2367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.053027 kubelet[2367]: I1213 14:30:21.053010 2367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbe5c41c78ba881c803767dc44a5f460-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-e445ccd8ad\" (UID: \"cbe5c41c78ba881c803767dc44a5f460\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.053108 kubelet[2367]: I1213 14:30:21.053042 2367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47bf02b416ed2351bf10c0b807bd8cb9-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-e445ccd8ad\" (UID: \"47bf02b416ed2351bf10c0b807bd8cb9\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.053108 kubelet[2367]: I1213 14:30:21.053099 2367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47bf02b416ed2351bf10c0b807bd8cb9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-e445ccd8ad\" (UID: \"47bf02b416ed2351bf10c0b807bd8cb9\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.053198 kubelet[2367]: I1213 14:30:21.053142 2367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.053198 kubelet[2367]: I1213 14:30:21.053173 2367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.053280 kubelet[2367]: I1213 14:30:21.053202 2367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.053280 kubelet[2367]: I1213 14:30:21.053273 2367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.053366 kubelet[2367]: E1213 14:30:21.052884 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-e445ccd8ad?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="400ms" Dec 13 14:30:21.137921 kubelet[2367]: I1213 14:30:21.137812 2367 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.140435 kubelet[2367]: E1213 14:30:21.140065 2367 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.338649 env[1521]: time="2024-12-13T14:30:21.338602380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-e445ccd8ad,Uid:47bf02b416ed2351bf10c0b807bd8cb9,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:21.341237 env[1521]: time="2024-12-13T14:30:21.341192501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-e445ccd8ad,Uid:386db16333afdbbf74c80fa7d0356d8a,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:21.343563 env[1521]: time="2024-12-13T14:30:21.343529920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-e445ccd8ad,Uid:cbe5c41c78ba881c803767dc44a5f460,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:21.453976 kubelet[2367]: E1213 14:30:21.453875 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-e445ccd8ad?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="800ms" Dec 13 14:30:21.542096 kubelet[2367]: I1213 14:30:21.542057 2367 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.542637 kubelet[2367]: E1213 14:30:21.542607 2367 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:21.962514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489873984.mount: Deactivated successfully. Dec 13 14:30:21.963939 kubelet[2367]: W1213 14:30:21.963885 2367 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:21.964057 kubelet[2367]: E1213 14:30:21.963952 2367 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:21.997127 env[1521]: time="2024-12-13T14:30:21.997072945Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.000539 env[1521]: time="2024-12-13T14:30:22.000487773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.010936 env[1521]: time="2024-12-13T14:30:22.010878456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.016782 env[1521]: time="2024-12-13T14:30:22.016733502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.020764 env[1521]: time="2024-12-13T14:30:22.020708934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.026056 env[1521]: time="2024-12-13T14:30:22.026017576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.028795 env[1521]: time="2024-12-13T14:30:22.028762598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.033386 env[1521]: time="2024-12-13T14:30:22.033347234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.037988 env[1521]: time="2024-12-13T14:30:22.037953371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.042730 env[1521]: time="2024-12-13T14:30:22.042681809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.045911 env[1521]: time="2024-12-13T14:30:22.045878134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.050407 env[1521]: time="2024-12-13T14:30:22.050372670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:22.141668 env[1521]: time="2024-12-13T14:30:22.141581595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:22.141668 env[1521]: time="2024-12-13T14:30:22.141644996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:22.141944 env[1521]: time="2024-12-13T14:30:22.141901998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:22.142250 env[1521]: time="2024-12-13T14:30:22.142184600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e4dd64c88087fc37c41ca8e7b0a5d20540a0fbd7fe4d3d509844824f46b567d pid=2411 runtime=io.containerd.runc.v2 Dec 13 14:30:22.147804 env[1521]: time="2024-12-13T14:30:22.147274640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:22.147804 env[1521]: time="2024-12-13T14:30:22.147334041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:22.147804 env[1521]: time="2024-12-13T14:30:22.147363941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:22.147804 env[1521]: time="2024-12-13T14:30:22.147501142Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af455fbd7e225570ec6cd1e42f31811ef68b0b4444795fd6cf84aaa1a0318114 pid=2419 runtime=io.containerd.runc.v2 Dec 13 14:30:22.164671 env[1521]: time="2024-12-13T14:30:22.164485977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:22.164671 env[1521]: time="2024-12-13T14:30:22.164527778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:22.164671 env[1521]: time="2024-12-13T14:30:22.164543278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:22.165183 env[1521]: time="2024-12-13T14:30:22.165094182Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7aa4152126300ce98451553053c247f66c7a4324f9136f198397e9bca7d5ed4 pid=2449 runtime=io.containerd.runc.v2 Dec 13 14:30:22.208626 kubelet[2367]: W1213 14:30:22.208034 2367 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:22.208626 kubelet[2367]: E1213 14:30:22.208081 2367 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:22.227070 kubelet[2367]: W1213 14:30:22.225678 2367 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:22.227070 kubelet[2367]: E1213 14:30:22.225737 2367 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:22.249901 env[1521]: time="2024-12-13T14:30:22.249856456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-e445ccd8ad,Uid:cbe5c41c78ba881c803767dc44a5f460,Namespace:kube-system,Attempt:0,} returns sandbox id \"af455fbd7e225570ec6cd1e42f31811ef68b0b4444795fd6cf84aaa1a0318114\"" Dec 13 14:30:22.256216 kubelet[2367]: E1213 14:30:22.256186 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-e445ccd8ad?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="1.6s" Dec 13 14:30:22.263290 env[1521]: time="2024-12-13T14:30:22.263250763Z" level=info msg="CreateContainer within sandbox \"af455fbd7e225570ec6cd1e42f31811ef68b0b4444795fd6cf84aaa1a0318114\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:30:22.265303 env[1521]: time="2024-12-13T14:30:22.265257979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-e445ccd8ad,Uid:47bf02b416ed2351bf10c0b807bd8cb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e4dd64c88087fc37c41ca8e7b0a5d20540a0fbd7fe4d3d509844824f46b567d\"" Dec 13 14:30:22.269043 env[1521]: time="2024-12-13T14:30:22.269014709Z" level=info msg="CreateContainer within sandbox \"6e4dd64c88087fc37c41ca8e7b0a5d20540a0fbd7fe4d3d509844824f46b567d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:30:22.282512 env[1521]: time="2024-12-13T14:30:22.282471216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-e445ccd8ad,Uid:386db16333afdbbf74c80fa7d0356d8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7aa4152126300ce98451553053c247f66c7a4324f9136f198397e9bca7d5ed4\"" Dec 13 14:30:22.284747 kubelet[2367]: 
W1213 14:30:22.284668 2367 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-e445ccd8ad&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:22.284747 kubelet[2367]: E1213 14:30:22.284744 2367 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-e445ccd8ad&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Dec 13 14:30:22.285693 env[1521]: time="2024-12-13T14:30:22.285661441Z" level=info msg="CreateContainer within sandbox \"c7aa4152126300ce98451553053c247f66c7a4324f9136f198397e9bca7d5ed4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:30:22.324461 env[1521]: time="2024-12-13T14:30:22.324415749Z" level=info msg="CreateContainer within sandbox \"af455fbd7e225570ec6cd1e42f31811ef68b0b4444795fd6cf84aaa1a0318114\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c731beaf9f0f367dccd2069fe0f41bcd8d91926682a63aa7ebbff98d15f84bf\"" Dec 13 14:30:22.325263 env[1521]: time="2024-12-13T14:30:22.325232056Z" level=info msg="StartContainer for \"3c731beaf9f0f367dccd2069fe0f41bcd8d91926682a63aa7ebbff98d15f84bf\"" Dec 13 14:30:22.326632 kubelet[2367]: E1213 14:30:22.326604 2367 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-e445ccd8ad.1810c2f693033550 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-e445ccd8ad,UID:ci-3510.3.6-a-e445ccd8ad,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-e445ccd8ad,},FirstTimestamp:2024-12-13 14:30:20.821845328 +0000 UTC m=+0.781883727,LastTimestamp:2024-12-13 14:30:20.821845328 +0000 UTC m=+0.781883727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-e445ccd8ad,}" Dec 13 14:30:22.336851 env[1521]: time="2024-12-13T14:30:22.336810648Z" level=info msg="CreateContainer within sandbox \"6e4dd64c88087fc37c41ca8e7b0a5d20540a0fbd7fe4d3d509844824f46b567d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"87041fbe6c648931cb7ba722417acae3677f0b8140f0010fd1c9c3039fccee1e\"" Dec 13 14:30:22.337518 env[1521]: time="2024-12-13T14:30:22.337482653Z" level=info msg="StartContainer for \"87041fbe6c648931cb7ba722417acae3677f0b8140f0010fd1c9c3039fccee1e\"" Dec 13 14:30:22.340973 env[1521]: time="2024-12-13T14:30:22.340926481Z" level=info msg="CreateContainer within sandbox \"c7aa4152126300ce98451553053c247f66c7a4324f9136f198397e9bca7d5ed4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e08bee9f497eafbea15ed9485462c9e136ae6144dba1c9e552b5cdbaa99e4bca\"" Dec 13 14:30:22.341985 env[1521]: time="2024-12-13T14:30:22.341957389Z" level=info msg="StartContainer for \"e08bee9f497eafbea15ed9485462c9e136ae6144dba1c9e552b5cdbaa99e4bca\"" Dec 13 14:30:22.361420 kubelet[2367]: I1213 14:30:22.358119 2367 kubelet_node_status.go:73] "Attempting to 
register node" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:22.361420 kubelet[2367]: E1213 14:30:22.358632 2367 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:22.456736 env[1521]: time="2024-12-13T14:30:22.456662701Z" level=info msg="StartContainer for \"3c731beaf9f0f367dccd2069fe0f41bcd8d91926682a63aa7ebbff98d15f84bf\" returns successfully" Dec 13 14:30:22.459205 env[1521]: time="2024-12-13T14:30:22.459166821Z" level=info msg="StartContainer for \"e08bee9f497eafbea15ed9485462c9e136ae6144dba1c9e552b5cdbaa99e4bca\" returns successfully" Dec 13 14:30:22.516010 env[1521]: time="2024-12-13T14:30:22.515894472Z" level=info msg="StartContainer for \"87041fbe6c648931cb7ba722417acae3677f0b8140f0010fd1c9c3039fccee1e\" returns successfully" Dec 13 14:30:23.961003 kubelet[2367]: I1213 14:30:23.960972 2367 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:24.631650 kubelet[2367]: E1213 14:30:24.631573 2367 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-e445ccd8ad\" not found" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:24.644453 kubelet[2367]: I1213 14:30:24.644420 2367 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:24.814135 kubelet[2367]: I1213 14:30:24.814104 2367 apiserver.go:52] "Watching apiserver" Dec 13 14:30:24.851757 kubelet[2367]: I1213 14:30:24.851695 2367 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:30:24.950744 kubelet[2367]: E1213 14:30:24.950220 2367 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-e445ccd8ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:27.406732 systemd[1]: Reloading. Dec 13 14:30:27.459914 kubelet[2367]: W1213 14:30:27.459884 2367 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:30:27.487023 /usr/lib/systemd/system-generators/torcx-generator[2654]: time="2024-12-13T14:30:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:30:27.487467 /usr/lib/systemd/system-generators/torcx-generator[2654]: time="2024-12-13T14:30:27Z" level=info msg="torcx already run" Dec 13 14:30:27.590634 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:30:27.590653 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:30:27.609032 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:30:27.705117 systemd[1]: Stopping kubelet.service... Dec 13 14:30:27.718275 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 13 14:30:27.718975 systemd[1]: Stopped kubelet.service. Dec 13 14:30:27.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:27.721335 systemd[1]: Starting kubelet.service... Dec 13 14:30:27.736677 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 13 14:30:27.736768 kernel: audit: type=1131 audit(1734100227.718:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:27.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:27.844841 systemd[1]: Started kubelet.service. Dec 13 14:30:27.859750 kernel: audit: type=1130 audit(1734100227.844:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:27.922330 kubelet[2732]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:30:27.922330 kubelet[2732]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:30:27.922330 kubelet[2732]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:30:27.922330 kubelet[2732]: I1213 14:30:27.922149 2732 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:30:27.928904 kubelet[2732]: I1213 14:30:27.928881 2732 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:30:27.929054 kubelet[2732]: I1213 14:30:27.929041 2732 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:30:27.929411 kubelet[2732]: I1213 14:30:27.929390 2732 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:30:27.931027 kubelet[2732]: I1213 14:30:27.930999 2732 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:30:27.933534 kubelet[2732]: I1213 14:30:27.933516 2732 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:30:27.948576 kubelet[2732]: I1213 14:30:27.948549 2732 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:30:27.949118 kubelet[2732]: I1213 14:30:27.949097 2732 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:30:27.949392 kubelet[2732]: I1213 14:30:27.949371 2732 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:30:27.949539 kubelet[2732]: I1213 14:30:27.949406 2732 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:30:27.949539 kubelet[2732]: I1213 14:30:27.949421 2732 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:30:27.949539 kubelet[2732]: I1213 14:30:27.949467 2732 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:27.949697 kubelet[2732]: I1213 14:30:27.949563 2732 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:30:27.949697 kubelet[2732]: I1213 14:30:27.949579 2732 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:30:27.949697 kubelet[2732]: I1213 14:30:27.949614 2732 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:30:27.949697 kubelet[2732]: I1213 14:30:27.949634 2732 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:30:27.956172 kubelet[2732]: I1213 14:30:27.956101 2732 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:30:27.957366 kubelet[2732]: I1213 14:30:27.957328 2732 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:30:27.957758 kubelet[2732]: I1213 14:30:27.957745 2732 server.go:1256] "Started kubelet" Dec 13 14:30:27.976753 kernel: audit: type=1400 audit(1734100227.958:233): avc: denied { mac_admin } for pid=2732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:27.958000 audit[2732]: AVC avc: denied { mac_admin } for pid=2732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:27.976918 kubelet[2732]: I1213 
14:30:27.959754 2732 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.959796 2732 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.959829 2732 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.965276 2732 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.966506 2732 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.967662 2732 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.967998 2732 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.969811 2732 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.970314 2732 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.970419 2732 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.975210 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.976315 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.976339 2732 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:30:27.976918 kubelet[2732]: I1213 14:30:27.976369 2732 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:30:27.976918 kubelet[2732]: E1213 14:30:27.976423 2732 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:30:27.981291 kubelet[2732]: I1213 14:30:27.981269 2732 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:30:27.981430 kubelet[2732]: I1213 14:30:27.981408 2732 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:30:27.958000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:27.996976 kernel: audit: type=1401 audit(1734100227.958:233): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:27.958000 audit[2732]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00041bb60 a1=c0006e2d08 a2=c00041bb00 a3=25 items=0 ppid=1 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:28.003017 kubelet[2732]: E1213 14:30:28.003001 2732 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:30:28.005353 kubelet[2732]: I1213 14:30:28.005338 2732 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:30:28.020736 kernel: audit: type=1300 audit(1734100227.958:233): arch=c000003e syscall=188 success=no exit=-22 a0=c00041bb60 a1=c0006e2d08 a2=c00041bb00 a3=25 items=0 ppid=1 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:27.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:30:28.040734 kernel: audit: type=1327 audit(1734100227.958:233): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:30:27.959000 audit[2732]: AVC avc: denied { mac_admin } for pid=2732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:27.959000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:28.065411 kernel: audit: type=1400 audit(1734100227.959:234): avc: denied { mac_admin } for pid=2732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:28.065567 kernel: audit: type=1401 audit(1734100227.959:234): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:27.959000 audit[2732]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006dcd80 a1=c0006e2d20 a2=c00041bc80 a3=25 items=0 ppid=1 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:28.076489 kubelet[2732]: E1213 14:30:28.076470 2732 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:30:28.076673 kubelet[2732]: I1213 14:30:28.076660 2732 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:30:28.076751 kubelet[2732]: I1213 14:30:28.076744 2732 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:30:28.076818 kubelet[2732]: I1213 14:30:28.076812 2732 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:28.077017 kubelet[2732]: I1213 14:30:28.077005 2732 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:30:28.077097 kubelet[2732]: I1213 14:30:28.077090 2732 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:30:28.077147 kubelet[2732]: I1213 14:30:28.077142 2732 policy_none.go:49] "None policy: Start" Dec 13 14:30:28.077953 kubelet[2732]: I1213 14:30:28.077941 2732 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:30:28.078050 kubelet[2732]: I1213 14:30:28.078043 2732 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:30:28.078308 kubelet[2732]: I1213 14:30:28.078295 2732 state_mem.go:75] 
"Updated machine memory state" Dec 13 14:30:28.079466 kubelet[2732]: I1213 14:30:28.079451 2732 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:30:28.079598 kubelet[2732]: I1213 14:30:28.079586 2732 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:30:28.079841 kubelet[2732]: I1213 14:30:28.079829 2732 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:30:28.089689 kernel: audit: type=1300 audit(1734100227.959:234): arch=c000003e syscall=188 success=no exit=-22 a0=c0006dcd80 a1=c0006e2d20 a2=c00041bc80 a3=25 items=0 ppid=1 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:27.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:30:28.098017 kubelet[2732]: I1213 14:30:28.094401 2732 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.078000 audit[2732]: AVC avc: denied { mac_admin } for pid=2732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:30:28.078000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:30:28.078000 audit[2732]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000dffd70 a1=c000796150 a2=c000dffd40 a3=25 items=0 ppid=1 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:28.078000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:30:28.108745 kernel: audit: type=1327 audit(1734100227.959:234): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:30:28.111601 kubelet[2732]: I1213 14:30:28.111583 2732 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.111775 kubelet[2732]: I1213 14:30:28.111766 2732 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.277740 kubelet[2732]: I1213 14:30:28.277598 2732 topology_manager.go:215] "Topology Admit Handler" podUID="47bf02b416ed2351bf10c0b807bd8cb9" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.277959 kubelet[2732]: I1213 14:30:28.277942 2732 topology_manager.go:215] "Topology Admit Handler" podUID="386db16333afdbbf74c80fa7d0356d8a" podNamespace="kube-system" 
podName="kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.278134 kubelet[2732]: I1213 14:30:28.278122 2732 topology_manager.go:215] "Topology Admit Handler" podUID="cbe5c41c78ba881c803767dc44a5f460" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.288709 kubelet[2732]: W1213 14:30:28.288682 2732 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:30:28.291930 kubelet[2732]: W1213 14:30:28.291906 2732 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:30:28.292789 kubelet[2732]: W1213 14:30:28.292753 2732 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:30:28.292902 kubelet[2732]: E1213 14:30:28.292826 2732 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.374930 kubelet[2732]: I1213 14:30:28.374894 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47bf02b416ed2351bf10c0b807bd8cb9-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-e445ccd8ad\" (UID: \"47bf02b416ed2351bf10c0b807bd8cb9\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.375120 kubelet[2732]: I1213 14:30:28.374945 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47bf02b416ed2351bf10c0b807bd8cb9-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-e445ccd8ad\" (UID: \"47bf02b416ed2351bf10c0b807bd8cb9\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.375120 kubelet[2732]: I1213 14:30:28.374977 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.375120 kubelet[2732]: I1213 14:30:28.375016 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.375120 kubelet[2732]: I1213 14:30:28.375047 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.375120 kubelet[2732]: I1213 14:30:28.375077 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47bf02b416ed2351bf10c0b807bd8cb9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-e445ccd8ad\" (UID: \"47bf02b416ed2351bf10c0b807bd8cb9\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.375326 kubelet[2732]: I1213 14:30:28.375103 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.375326 kubelet[2732]: I1213 14:30:28.375134 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/386db16333afdbbf74c80fa7d0356d8a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-e445ccd8ad\" (UID: \"386db16333afdbbf74c80fa7d0356d8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.375326 kubelet[2732]: I1213 14:30:28.375164 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbe5c41c78ba881c803767dc44a5f460-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-e445ccd8ad\" (UID: \"cbe5c41c78ba881c803767dc44a5f460\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:28.952045 kubelet[2732]: I1213 14:30:28.952013 2732 apiserver.go:52] "Watching apiserver" Dec 13 14:30:28.971015 kubelet[2732]: I1213 14:30:28.970984 2732 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:30:29.011436 kubelet[2732]: W1213 14:30:29.011412 2732 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:30:29.011694 kubelet[2732]: E1213 14:30:29.011679 2732 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-e445ccd8ad\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-e445ccd8ad" Dec 13 14:30:29.022152 kubelet[2732]: I1213 14:30:29.022124 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-e445ccd8ad" podStartSLOduration=1.022072753 podStartE2EDuration="1.022072753s" podCreationTimestamp="2024-12-13 14:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:29.021851651 +0000 UTC m=+1.169650906" watchObservedRunningTime="2024-12-13 14:30:29.022072753 +0000 UTC m=+1.169871908" Dec 13 14:30:29.031457 kubelet[2732]: I1213 14:30:29.031430 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-e445ccd8ad" podStartSLOduration=1.031390915 podStartE2EDuration="1.031390915s" podCreationTimestamp="2024-12-13 14:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:29.030498009 +0000 UTC m=+1.178297164" watchObservedRunningTime="2024-12-13 14:30:29.031390915 +0000 UTC m=+1.179190170" Dec 13 14:30:29.054179 kubelet[2732]: I1213 14:30:29.054147 2732 pod_startup_latency_tracker.go:102] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-e445ccd8ad" podStartSLOduration=2.054107469 podStartE2EDuration="2.054107469s" podCreationTimestamp="2024-12-13 14:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:29.039196168 +0000 UTC m=+1.186995323" watchObservedRunningTime="2024-12-13 14:30:29.054107469 +0000 UTC m=+1.201906724" Dec 13 14:30:33.288551 sudo[2008]: pam_unix(sudo:session): session closed for user root Dec 13 14:30:33.287000 audit[2008]: USER_END pid=2008 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:30:33.292846 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 14:30:33.292938 kernel: audit: type=1106 audit(1734100233.287:236): pid=2008 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:30:33.290000 audit[2008]: CRED_DISP pid=2008 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:30:33.324193 kernel: audit: type=1104 audit(1734100233.290:237): pid=2008 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:30:33.409313 sshd[2004]: pam_unix(sshd:session): session closed for user core Dec 13 14:30:33.409000 audit[2004]: USER_END pid=2004 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:30:33.413082 systemd[1]: sshd@6-10.200.8.17:22-10.200.16.10:46494.service: Deactivated successfully. Dec 13 14:30:33.413915 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:30:33.427911 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:30:33.429396 systemd-logind[1505]: Removed session 9. 
Dec 13 14:30:33.409000 audit[2004]: CRED_DISP pid=2004 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:30:33.445806 kernel: audit: type=1106 audit(1734100233.409:238): pid=2004 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:30:33.445901 kernel: audit: type=1104 audit(1734100233.409:239): pid=2004 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:30:33.445933 kernel: audit: type=1131 audit(1734100233.412:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.17:22-10.200.16.10:46494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:33.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.17:22-10.200.16.10:46494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:30:41.639765 kubelet[2732]: I1213 14:30:41.639727 2732 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:30:41.640258 env[1521]: time="2024-12-13T14:30:41.640181178Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
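(Editor's note, not part of the log: the kuberuntime_manager record above pushes the node's PodCIDR, 192.168.0.0/24, from the Node API object down to the container runtime. A hedged sketch for confirming the value on the Node object, assuming the official kubernetes Python client and a kubeconfig for this cluster are available; neither appears in the log itself:)

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
node = client.CoreV1Api().read_node(name="ci-3510.3.6-a-e445ccd8ad")
print(node.spec.pod_cidr)  # expected: 192.168.0.0/24, per the kubelet_network "Updating Pod CIDR" message
```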
Dec 13 14:30:41.640601 kubelet[2732]: I1213 14:30:41.640421 2732 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:30:42.253752 kubelet[2732]: I1213 14:30:42.253703 2732 topology_manager.go:215] "Topology Admit Handler" podUID="8a41698c-48d8-4ade-a04a-22921f848834" podNamespace="kube-system" podName="kube-proxy-nt46d" Dec 13 14:30:42.272688 kubelet[2732]: I1213 14:30:42.272653 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv72m\" (UniqueName: \"kubernetes.io/projected/8a41698c-48d8-4ade-a04a-22921f848834-kube-api-access-mv72m\") pod \"kube-proxy-nt46d\" (UID: \"8a41698c-48d8-4ade-a04a-22921f848834\") " pod="kube-system/kube-proxy-nt46d" Dec 13 14:30:42.272948 kubelet[2732]: I1213 14:30:42.272920 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a41698c-48d8-4ade-a04a-22921f848834-xtables-lock\") pod \"kube-proxy-nt46d\" (UID: \"8a41698c-48d8-4ade-a04a-22921f848834\") " pod="kube-system/kube-proxy-nt46d" Dec 13 14:30:42.273095 kubelet[2732]: I1213 14:30:42.273079 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a41698c-48d8-4ade-a04a-22921f848834-kube-proxy\") pod \"kube-proxy-nt46d\" (UID: \"8a41698c-48d8-4ade-a04a-22921f848834\") " pod="kube-system/kube-proxy-nt46d" Dec 13 14:30:42.273236 kubelet[2732]: I1213 14:30:42.273223 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a41698c-48d8-4ade-a04a-22921f848834-lib-modules\") pod \"kube-proxy-nt46d\" (UID: \"8a41698c-48d8-4ade-a04a-22921f848834\") " pod="kube-system/kube-proxy-nt46d" Dec 13 14:30:42.558426 env[1521]: time="2024-12-13T14:30:42.558300725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nt46d,Uid:8a41698c-48d8-4ade-a04a-22921f848834,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:42.607408 env[1521]: time="2024-12-13T14:30:42.602434052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:42.607408 env[1521]: time="2024-12-13T14:30:42.602745353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:42.607408 env[1521]: time="2024-12-13T14:30:42.602785053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:42.607408 env[1521]: time="2024-12-13T14:30:42.602942754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67c9d9b14737854a89090224d6d2bdd26fde1206d8baaf3add051eb6da8f152c pid=2816 runtime=io.containerd.runc.v2 Dec 13 14:30:42.719440 env[1521]: time="2024-12-13T14:30:42.719387452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nt46d,Uid:8a41698c-48d8-4ade-a04a-22921f848834,Namespace:kube-system,Attempt:0,} returns sandbox id \"67c9d9b14737854a89090224d6d2bdd26fde1206d8baaf3add051eb6da8f152c\"" Dec 13 14:30:42.722557 env[1521]: time="2024-12-13T14:30:42.722514268Z" level=info msg="CreateContainer within sandbox \"67c9d9b14737854a89090224d6d2bdd26fde1206d8baaf3add051eb6da8f152c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:30:42.790898 env[1521]: time="2024-12-13T14:30:42.790850718Z" level=info msg="CreateContainer within sandbox \"67c9d9b14737854a89090224d6d2bdd26fde1206d8baaf3add051eb6da8f152c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8263b5410261c00ff2db9fb219ab72a227d67b9615aaae99e7cffd0c84de191e\"" Dec 13 14:30:42.791825 env[1521]: time="2024-12-13T14:30:42.791791823Z" level=info msg="StartContainer for \"8263b5410261c00ff2db9fb219ab72a227d67b9615aaae99e7cffd0c84de191e\"" Dec 13 14:30:42.806740 kubelet[2732]: I1213 14:30:42.804847 2732 topology_manager.go:215] "Topology Admit Handler" podUID="5f03fbe1-48dd-4770-8e15-514e143fc2b6" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-48gwr" Dec 13 14:30:42.857928 env[1521]: time="2024-12-13T14:30:42.857871662Z" level=info msg="StartContainer for \"8263b5410261c00ff2db9fb219ab72a227d67b9615aaae99e7cffd0c84de191e\" returns successfully" Dec 13 14:30:42.877309 kubelet[2732]: I1213 14:30:42.877174 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f03fbe1-48dd-4770-8e15-514e143fc2b6-var-lib-calico\") pod \"tigera-operator-c7ccbd65-48gwr\" (UID: \"5f03fbe1-48dd-4770-8e15-514e143fc2b6\") " pod="tigera-operator/tigera-operator-c7ccbd65-48gwr" Dec 13 14:30:42.877309 kubelet[2732]: I1213 14:30:42.877242 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gls8z\" (UniqueName: \"kubernetes.io/projected/5f03fbe1-48dd-4770-8e15-514e143fc2b6-kube-api-access-gls8z\") pod \"tigera-operator-c7ccbd65-48gwr\" (UID: \"5f03fbe1-48dd-4770-8e15-514e143fc2b6\") " pod="tigera-operator/tigera-operator-c7ccbd65-48gwr" Dec 13 14:30:42.916000 audit[2907]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:42.927731 kernel: audit: type=1325 audit(1734100242.916:241): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:42.918000 audit[2908]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:42.918000 audit[2908]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd596ebda0 a2=0 a3=7ffd596ebd8c items=0 ppid=2867 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:42.955392 kernel: audit: type=1325 audit(1734100242.918:242): table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:42.955474 kernel: audit: type=1300 audit(1734100242.918:242): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd596ebda0 a2=0 a3=7ffd596ebd8c items=0 ppid=2867 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:42.955509 kernel: audit: type=1327 audit(1734100242.918:242): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:30:42.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:30:42.920000 audit[2909]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=2909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:42.978139 kernel: audit: type=1325 audit(1734100242.920:243): table=nat:43 family=2 entries=1 op=nft_register_chain pid=2909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:42.920000 audit[2909]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce6fa04c0 a2=0 a3=7ffce6fa04ac items=0 ppid=2867 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:42.920000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:30:43.009510 kernel: audit: type=1300 audit(1734100242.920:243): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce6fa04c0 a2=0 a3=7ffce6fa04ac items=0 ppid=2867 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.009582 kernel: audit: type=1327 audit(1734100242.920:243): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:30:43.009609 kernel: audit: type=1325 audit(1734100242.921:244): table=filter:44 family=2 entries=1 op=nft_register_chain pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:42.921000 audit[2910]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:42.921000 audit[2910]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf2cfe3d0 a2=0 a3=7ffdf2cfe3bc items=0 ppid=2867 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:42.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:30:43.057674 kernel: audit: type=1300 audit(1734100242.921:244): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf2cfe3d0 a2=0 a3=7ffdf2cfe3bc items=0 ppid=2867 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.057793 kernel: audit: type=1327 audit(1734100242.921:244): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:30:42.916000 audit[2907]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9b565190 a2=0 a3=7fff9b56517c items=0 ppid=2867 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:42.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:30:42.928000 audit[2911]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_chain pid=2911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:42.928000 audit[2911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8d33a1c0 a2=0 a3=7ffc8d33a1ac items=0 ppid=2867 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:42.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:30:42.929000 audit[2912]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=2912 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:42.929000 audit[2912]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffae544b70 a2=0 a3=7fffae544b5c items=0 ppid=2867 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:42.929000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:30:43.014000 audit[2914]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.014000 audit[2914]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe2dc2f170 a2=0 a3=7ffe2dc2f15c items=0 ppid=2867 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.014000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:30:43.017000 audit[2916]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.017000 audit[2916]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff1b6cfde0 a2=0 a3=7fff1b6cfdcc items=0 ppid=2867 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.017000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 14:30:43.021000 audit[2919]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.021000 audit[2919]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd0e05e1d0 a2=0 a3=7ffd0e05e1bc items=0 ppid=2867 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.021000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 14:30:43.023000 audit[2920]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.023000 audit[2920]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd45223fd0 a2=0 a3=7ffd45223fbc items=0 ppid=2867 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.023000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:30:43.026000 audit[2922]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.026000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff987dc640 a2=0 a3=7fff987dc62c items=0 ppid=2867 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:30:43.028000 audit[2923]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.028000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdac0d9990 a2=0 a3=7ffdac0d997c items=0 ppid=2867 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.028000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:30:43.031000 audit[2925]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.031000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff4eebd950 a2=0 a3=7fff4eebd93c items=0 
ppid=2867 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:30:43.034000 audit[2928]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.034000 audit[2928]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc31e654f0 a2=0 a3=7ffc31e654dc items=0 ppid=2867 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 14:30:43.035000 audit[2929]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.035000 audit[2929]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9c8ea0c0 a2=0 a3=7ffd9c8ea0ac items=0 ppid=2867 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.035000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:30:43.038000 audit[2931]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.038000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff884251c0 a2=0 a3=7fff884251ac items=0 ppid=2867 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:30:43.039000 audit[2932]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.039000 audit[2932]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd61b4a00 a2=0 a3=7ffdd61b49ec items=0 ppid=2867 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:30:43.043000 audit[2934]: NETFILTER_CFG table=filter:58 
family=2 entries=1 op=nft_register_rule pid=2934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.043000 audit[2934]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdb2045910 a2=0 a3=7ffdb20458fc items=0 ppid=2867 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.043000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:30:43.048000 audit[2937]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.048000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd87792110 a2=0 a3=7ffd877920fc items=0 ppid=2867 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:30:43.052000 audit[2940]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.052000 audit[2940]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8b563030 a2=0 a3=7ffe8b56301c items=0 ppid=2867 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:30:43.062000 audit[2941]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.062000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc2ae932a0 a2=0 a3=7ffc2ae9328c items=0 ppid=2867 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:30:43.066000 audit[2943]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.066000 audit[2943]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc633d28b0 a2=0 a3=7ffc633d289c items=0 ppid=2867 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.066000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:30:43.069000 audit[2946]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=2946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.069000 audit[2946]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe351a81d0 a2=0 a3=7ffe351a81bc items=0 ppid=2867 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:30:43.071000 audit[2947]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.071000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd15ae70f0 a2=0 a3=7ffd15ae70dc items=0 ppid=2867 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.071000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:30:43.074000 audit[2949]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=2949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:30:43.074000 audit[2949]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd6f764880 a2=0 a3=7ffd6f76486c items=0 ppid=2867 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:30:43.109671 env[1521]: time="2024-12-13T14:30:43.109576742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-48gwr,Uid:5f03fbe1-48dd-4770-8e15-514e143fc2b6,Namespace:tigera-operator,Attempt:0,}" Dec 13 14:30:43.145341 env[1521]: time="2024-12-13T14:30:43.145283922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:43.145500 env[1521]: time="2024-12-13T14:30:43.145318922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:43.145500 env[1521]: time="2024-12-13T14:30:43.145332022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:43.145635 env[1521]: time="2024-12-13T14:30:43.145483623Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a954857af4b8ae6d34986d15ef6208556fad082614ac4225572597568641d0e pid=2965 runtime=io.containerd.runc.v2 Dec 13 14:30:43.206713 env[1521]: time="2024-12-13T14:30:43.205923627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-48gwr,Uid:5f03fbe1-48dd-4770-8e15-514e143fc2b6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6a954857af4b8ae6d34986d15ef6208556fad082614ac4225572597568641d0e\"" Dec 13 14:30:43.208659 env[1521]: time="2024-12-13T14:30:43.207418435Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 14:30:43.210000 audit[2955]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=2955 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:43.210000 audit[2955]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffdf81710a0 a2=0 a3=7ffdf817108c items=0 ppid=2867 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:43.244000 audit[2955]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=2955 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:43.244000 audit[2955]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdf81710a0 a2=0 a3=7ffdf817108c items=0 ppid=2867 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:43.245000 audit[3001]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.245000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc05627fe0 a2=0 a3=7ffc05627fcc items=0 ppid=2867 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:30:43.248000 audit[3003]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.248000 audit[3003]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffa96b7dd0 a2=0 a3=7fffa96b7dbc items=0 ppid=2867 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.248000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 14:30:43.252000 audit[3006]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.252000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe90a0a510 a2=0 a3=7ffe90a0a4fc items=0 ppid=2867 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 14:30:43.253000 audit[3007]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.253000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6df3e030 a2=0 a3=7ffe6df3e01c items=0 ppid=2867 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.253000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:30:43.256000 audit[3009]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=3009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.256000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff4c6af930 a2=0 a3=7fff4c6af91c items=0 ppid=2867 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:30:43.257000 audit[3010]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.257000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc45541a00 a2=0 a3=7ffc455419ec items=0 ppid=2867 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.257000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:30:43.259000 audit[3012]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.259000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff199a5a00 a2=0 
a3=7fff199a59ec items=0 ppid=2867 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 14:30:43.263000 audit[3015]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.263000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffecba652b0 a2=0 a3=7ffecba6529c items=0 ppid=2867 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:30:43.264000 audit[3016]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.264000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff89cedf40 a2=0 a3=7fff89cedf2c items=0 ppid=2867 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:30:43.266000 audit[3018]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.266000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc718d1d30 a2=0 a3=7ffc718d1d1c items=0 ppid=2867 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:30:43.267000 audit[3019]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.267000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff0457b30 a2=0 a3=7ffff0457b1c items=0 ppid=2867 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.267000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:30:43.270000 
audit[3021]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.270000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1807c800 a2=0 a3=7ffe1807c7ec items=0 ppid=2867 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:30:43.273000 audit[3024]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.273000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4352bb50 a2=0 a3=7ffe4352bb3c items=0 ppid=2867 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:30:43.277000 audit[3027]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.277000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9d3b6ac0 a2=0 a3=7fff9d3b6aac items=0 ppid=2867 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 14:30:43.278000 audit[3028]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.278000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc594885d0 a2=0 a3=7ffc594885bc items=0 ppid=2867 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.278000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:30:43.280000 audit[3030]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.280000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff1286e9e0 a2=0 a3=7fff1286e9cc items=0 ppid=2867 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:30:43.284000 audit[3033]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.284000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffea5621910 a2=0 a3=7ffea56218fc items=0 ppid=2867 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:30:43.285000 audit[3034]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.285000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe400af470 a2=0 a3=7ffe400af45c items=0 ppid=2867 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.285000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:30:43.287000 audit[3036]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.287000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff746437c0 a2=0 a3=7fff746437ac items=0 ppid=2867 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.287000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:30:43.290000 audit[3037]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.290000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee4a6c0b0 a2=0 a3=7ffee4a6c09c items=0 ppid=2867 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.290000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:30:43.292000 audit[3039]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.292000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7ffca79afac0 a2=0 a3=7ffca79afaac items=0 ppid=2867 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:30:43.295000 audit[3042]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:30:43.295000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffedb418110 a2=0 a3=7ffedb4180fc items=0 ppid=2867 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:30:43.298000 audit[3044]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:30:43.298000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffdb210b900 a2=0 a3=7ffdb210b8ec items=0 ppid=2867 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.298000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:43.299000 audit[3044]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:30:43.299000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdb210b900 a2=0 a3=7ffdb210b8ec items=0 ppid=2867 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:43.299000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:43.390400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424401139.mount: Deactivated successfully. Dec 13 14:30:48.005333 kubelet[2732]: I1213 14:30:48.005290 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nt46d" podStartSLOduration=6.005252334 podStartE2EDuration="6.005252334s" podCreationTimestamp="2024-12-13 14:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:43.072835658 +0000 UTC m=+15.220634913" watchObservedRunningTime="2024-12-13 14:30:48.005252334 +0000 UTC m=+20.153051489" Dec 13 14:30:50.350598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430365508.mount: Deactivated successfully. 
Dec 13 14:30:51.031071 env[1521]: time="2024-12-13T14:30:51.031008635Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:51.034768 env[1521]: time="2024-12-13T14:30:51.034713551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:51.037490 env[1521]: time="2024-12-13T14:30:51.037452963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:51.039876 env[1521]: time="2024-12-13T14:30:51.039845573Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:51.040448 env[1521]: time="2024-12-13T14:30:51.040413475Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 14:30:51.043235 env[1521]: time="2024-12-13T14:30:51.043207488Z" level=info msg="CreateContainer within sandbox \"6a954857af4b8ae6d34986d15ef6208556fad082614ac4225572597568641d0e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 14:30:51.070637 env[1521]: time="2024-12-13T14:30:51.070599306Z" level=info msg="CreateContainer within sandbox \"6a954857af4b8ae6d34986d15ef6208556fad082614ac4225572597568641d0e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7dcfb5663471cdcf17a364935aff8792d16c0f2000510fe9303018e75364e209\"" Dec 13 14:30:51.071160 env[1521]: time="2024-12-13T14:30:51.071040608Z" level=info msg="StartContainer for \"7dcfb5663471cdcf17a364935aff8792d16c0f2000510fe9303018e75364e209\"" Dec 13 14:30:51.125185 env[1521]: time="2024-12-13T14:30:51.125140243Z" level=info msg="StartContainer for \"7dcfb5663471cdcf17a364935aff8792d16c0f2000510fe9303018e75364e209\" returns successfully" Dec 13 14:30:52.099673 kubelet[2732]: I1213 14:30:52.098106 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-48gwr" podStartSLOduration=2.264178812 podStartE2EDuration="10.098040757s" podCreationTimestamp="2024-12-13 14:30:42 +0000 UTC" firstStartedPulling="2024-12-13 14:30:43.206850832 +0000 UTC m=+15.354649987" lastFinishedPulling="2024-12-13 14:30:51.040712777 +0000 UTC m=+23.188511932" observedRunningTime="2024-12-13 14:30:52.097614155 +0000 UTC m=+24.245413310" watchObservedRunningTime="2024-12-13 14:30:52.098040757 +0000 UTC m=+24.245840012" Dec 13 14:30:54.075000 audit[3082]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:54.080444 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 14:30:54.080531 kernel: audit: type=1325 audit(1734100254.075:292): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:54.075000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffca8f04540 a2=0 a3=7ffca8f0452c items=0 ppid=2867 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:54.110422 kernel: audit: type=1300 audit(1734100254.075:292): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffca8f04540 a2=0 a3=7ffca8f0452c items=0 ppid=2867 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:54.075000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:54.120097 kernel: audit: type=1327 audit(1734100254.075:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:54.111000 audit[3082]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:54.130196 kernel: audit: type=1325 audit(1734100254.111:293): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:54.147765 kernel: audit: type=1300 audit(1734100254.111:293): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca8f04540 a2=0 a3=0 items=0 ppid=2867 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:54.111000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca8f04540 a2=0 a3=0 items=0 ppid=2867 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:54.111000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:54.166746 kernel: audit: type=1327 audit(1734100254.111:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:54.163000 audit[3084]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:54.183740 kernel: audit: type=1325 audit(1734100254.163:294): table=filter:94 family=2 entries=16 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:54.163000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffb82832a0 a2=0 a3=7fffb828328c items=0 ppid=2867 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:54.223742 kernel: audit: type=1300 audit(1734100254.163:294): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffb82832a0 a2=0 a3=7fffb828328c items=0 ppid=2867 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:54.163000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:54.234738 kernel: audit: type=1327 audit(1734100254.163:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:54.234873 kernel: audit: type=1325 audit(1734100254.169:295): table=nat:95 family=2 entries=12 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:54.169000 audit[3084]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:54.169000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb82832a0 a2=0 a3=0 items=0 ppid=2867 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:54.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:54.292376 kubelet[2732]: I1213 14:30:54.292333 2732 topology_manager.go:215] "Topology Admit Handler" podUID="17426ce8-ff42-43db-a006-156e2c1c9224" podNamespace="calico-system" podName="calico-typha-64649bd6d8-t5l6v" Dec 13 14:30:54.357899 kubelet[2732]: I1213 14:30:54.357779 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17426ce8-ff42-43db-a006-156e2c1c9224-tigera-ca-bundle\") pod \"calico-typha-64649bd6d8-t5l6v\" (UID: \"17426ce8-ff42-43db-a006-156e2c1c9224\") " pod="calico-system/calico-typha-64649bd6d8-t5l6v" Dec 13 14:30:54.358152 kubelet[2732]: I1213 14:30:54.358123 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhkx\" (UniqueName: \"kubernetes.io/projected/17426ce8-ff42-43db-a006-156e2c1c9224-kube-api-access-wqhkx\") pod \"calico-typha-64649bd6d8-t5l6v\" (UID: \"17426ce8-ff42-43db-a006-156e2c1c9224\") " pod="calico-system/calico-typha-64649bd6d8-t5l6v" Dec 13 14:30:54.358350 kubelet[2732]: I1213 14:30:54.358339 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/17426ce8-ff42-43db-a006-156e2c1c9224-typha-certs\") pod \"calico-typha-64649bd6d8-t5l6v\" (UID: \"17426ce8-ff42-43db-a006-156e2c1c9224\") " pod="calico-system/calico-typha-64649bd6d8-t5l6v" Dec 13 14:30:54.453507 kubelet[2732]: I1213 14:30:54.453465 2732 topology_manager.go:215] "Topology Admit Handler" podUID="a562b0a7-137e-4e62-b42a-5ec8c7292df0" podNamespace="calico-system" podName="calico-node-rqnjh" Dec 13 14:30:54.562656 kubelet[2732]: I1213 14:30:54.562619 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a562b0a7-137e-4e62-b42a-5ec8c7292df0-node-certs\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.562863 kubelet[2732]: I1213 14:30:54.562692 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a562b0a7-137e-4e62-b42a-5ec8c7292df0-tigera-ca-bundle\") pod \"calico-node-rqnjh\" (UID: 
\"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.562863 kubelet[2732]: I1213 14:30:54.562753 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a562b0a7-137e-4e62-b42a-5ec8c7292df0-var-run-calico\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.562863 kubelet[2732]: I1213 14:30:54.562784 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a562b0a7-137e-4e62-b42a-5ec8c7292df0-flexvol-driver-host\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.562863 kubelet[2732]: I1213 14:30:54.562821 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a562b0a7-137e-4e62-b42a-5ec8c7292df0-cni-net-dir\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.562863 kubelet[2732]: I1213 14:30:54.562852 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a562b0a7-137e-4e62-b42a-5ec8c7292df0-var-lib-calico\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.563075 kubelet[2732]: I1213 14:30:54.562884 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a562b0a7-137e-4e62-b42a-5ec8c7292df0-cni-log-dir\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.563075 kubelet[2732]: I1213 14:30:54.562925 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a562b0a7-137e-4e62-b42a-5ec8c7292df0-lib-modules\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.563075 kubelet[2732]: I1213 14:30:54.562957 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a562b0a7-137e-4e62-b42a-5ec8c7292df0-policysync\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.563075 kubelet[2732]: I1213 14:30:54.562995 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a562b0a7-137e-4e62-b42a-5ec8c7292df0-cni-bin-dir\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.563075 kubelet[2732]: I1213 14:30:54.563036 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkl96\" (UniqueName: \"kubernetes.io/projected/a562b0a7-137e-4e62-b42a-5ec8c7292df0-kube-api-access-rkl96\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " 
pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.563274 kubelet[2732]: I1213 14:30:54.563074 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a562b0a7-137e-4e62-b42a-5ec8c7292df0-xtables-lock\") pod \"calico-node-rqnjh\" (UID: \"a562b0a7-137e-4e62-b42a-5ec8c7292df0\") " pod="calico-system/calico-node-rqnjh" Dec 13 14:30:54.581366 kubelet[2732]: I1213 14:30:54.581325 2732 topology_manager.go:215] "Topology Admit Handler" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" podNamespace="calico-system" podName="csi-node-driver-889kf" Dec 13 14:30:54.582239 kubelet[2732]: E1213 14:30:54.582194 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:30:54.596524 env[1521]: time="2024-12-13T14:30:54.596478453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64649bd6d8-t5l6v,Uid:17426ce8-ff42-43db-a006-156e2c1c9224,Namespace:calico-system,Attempt:0,}" Dec 13 14:30:54.645391 env[1521]: time="2024-12-13T14:30:54.644500951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:54.645743 env[1521]: time="2024-12-13T14:30:54.645682256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:54.645994 env[1521]: time="2024-12-13T14:30:54.645956357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:54.646524 env[1521]: time="2024-12-13T14:30:54.646487759Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d41eb0fd48e9a7192ba8518d20ebc4d44572f13becc4388f4104cbaa0f19a3f pid=3093 runtime=io.containerd.runc.v2 Dec 13 14:30:54.663985 kubelet[2732]: I1213 14:30:54.663949 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/54fdb1bd-71c2-40c0-a409-956b5f88cc85-registration-dir\") pod \"csi-node-driver-889kf\" (UID: \"54fdb1bd-71c2-40c0-a409-956b5f88cc85\") " pod="calico-system/csi-node-driver-889kf" Dec 13 14:30:54.664094 kubelet[2732]: I1213 14:30:54.664079 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66f6w\" (UniqueName: \"kubernetes.io/projected/54fdb1bd-71c2-40c0-a409-956b5f88cc85-kube-api-access-66f6w\") pod \"csi-node-driver-889kf\" (UID: \"54fdb1bd-71c2-40c0-a409-956b5f88cc85\") " pod="calico-system/csi-node-driver-889kf" Dec 13 14:30:54.664241 kubelet[2732]: I1213 14:30:54.664224 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54fdb1bd-71c2-40c0-a409-956b5f88cc85-kubelet-dir\") pod \"csi-node-driver-889kf\" (UID: \"54fdb1bd-71c2-40c0-a409-956b5f88cc85\") " pod="calico-system/csi-node-driver-889kf" Dec 13 14:30:54.664297 kubelet[2732]: I1213 14:30:54.664283 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/54fdb1bd-71c2-40c0-a409-956b5f88cc85-socket-dir\") pod \"csi-node-driver-889kf\" (UID: \"54fdb1bd-71c2-40c0-a409-956b5f88cc85\") " pod="calico-system/csi-node-driver-889kf" Dec 13 14:30:54.664371 kubelet[2732]: I1213 14:30:54.664358 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/54fdb1bd-71c2-40c0-a409-956b5f88cc85-varrun\") pod \"csi-node-driver-889kf\" (UID: \"54fdb1bd-71c2-40c0-a409-956b5f88cc85\") " pod="calico-system/csi-node-driver-889kf" Dec 13 14:30:54.685972 kubelet[2732]: E1213 14:30:54.685941 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.686092 kubelet[2732]: W1213 14:30:54.685979 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.686092 kubelet[2732]: E1213 14:30:54.686007 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:54.689961 kubelet[2732]: E1213 14:30:54.689925 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.690075 kubelet[2732]: W1213 14:30:54.690058 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.690255 kubelet[2732]: E1213 14:30:54.690242 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.690477 kubelet[2732]: E1213 14:30:54.690464 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.690563 kubelet[2732]: W1213 14:30:54.690551 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.690704 kubelet[2732]: E1213 14:30:54.690694 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.690947 kubelet[2732]: E1213 14:30:54.690934 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.691058 kubelet[2732]: W1213 14:30:54.691043 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.691237 kubelet[2732]: E1213 14:30:54.691225 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.691475 kubelet[2732]: E1213 14:30:54.691464 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.691556 kubelet[2732]: W1213 14:30:54.691546 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.691683 kubelet[2732]: E1213 14:30:54.691675 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.691866 kubelet[2732]: E1213 14:30:54.691858 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.691941 kubelet[2732]: W1213 14:30:54.691933 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.692065 kubelet[2732]: E1213 14:30:54.692058 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:54.692252 kubelet[2732]: E1213 14:30:54.692245 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.692322 kubelet[2732]: W1213 14:30:54.692314 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.692446 kubelet[2732]: E1213 14:30:54.692438 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.692607 kubelet[2732]: E1213 14:30:54.692600 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.692668 kubelet[2732]: W1213 14:30:54.692660 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.692755 kubelet[2732]: E1213 14:30:54.692746 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.693073 kubelet[2732]: E1213 14:30:54.693064 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.693144 kubelet[2732]: W1213 14:30:54.693135 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.693207 kubelet[2732]: E1213 14:30:54.693200 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.693419 kubelet[2732]: E1213 14:30:54.693411 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.693634 kubelet[2732]: W1213 14:30:54.693617 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.693774 kubelet[2732]: E1213 14:30:54.693761 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.709390 kubelet[2732]: E1213 14:30:54.709371 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.709532 kubelet[2732]: W1213 14:30:54.709516 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.709655 kubelet[2732]: E1213 14:30:54.709642 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:54.709981 kubelet[2732]: E1213 14:30:54.709965 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.710088 kubelet[2732]: W1213 14:30:54.710073 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.710187 kubelet[2732]: E1213 14:30:54.710177 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.721358 kubelet[2732]: E1213 14:30:54.721345 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.721451 kubelet[2732]: W1213 14:30:54.721442 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.721519 kubelet[2732]: E1213 14:30:54.721503 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.759998 env[1521]: time="2024-12-13T14:30:54.759951627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64649bd6d8-t5l6v,Uid:17426ce8-ff42-43db-a006-156e2c1c9224,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d41eb0fd48e9a7192ba8518d20ebc4d44572f13becc4388f4104cbaa0f19a3f\"" Dec 13 14:30:54.761893 env[1521]: time="2024-12-13T14:30:54.761865235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 14:30:54.764202 env[1521]: time="2024-12-13T14:30:54.764163044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rqnjh,Uid:a562b0a7-137e-4e62-b42a-5ec8c7292df0,Namespace:calico-system,Attempt:0,}" Dec 13 14:30:54.765411 kubelet[2732]: E1213 14:30:54.765155 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.765411 kubelet[2732]: W1213 14:30:54.765171 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.765411 kubelet[2732]: E1213 14:30:54.765273 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.765812 kubelet[2732]: E1213 14:30:54.765678 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.765812 kubelet[2732]: W1213 14:30:54.765712 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.765812 kubelet[2732]: E1213 14:30:54.765773 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:54.767916 kubelet[2732]: E1213 14:30:54.766540 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.767916 kubelet[2732]: W1213 14:30:54.766554 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.767916 kubelet[2732]: E1213 14:30:54.766575 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.767916 kubelet[2732]: E1213 14:30:54.766823 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.767916 kubelet[2732]: W1213 14:30:54.766833 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.767916 kubelet[2732]: E1213 14:30:54.766914 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.767916 kubelet[2732]: E1213 14:30:54.767005 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.767916 kubelet[2732]: W1213 14:30:54.767012 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.767916 kubelet[2732]: E1213 14:30:54.767150 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.767916 kubelet[2732]: E1213 14:30:54.767806 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.768327 kubelet[2732]: W1213 14:30:54.767818 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.768327 kubelet[2732]: E1213 14:30:54.767839 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.768639 kubelet[2732]: E1213 14:30:54.768603 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.768639 kubelet[2732]: W1213 14:30:54.768617 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.769065 kubelet[2732]: E1213 14:30:54.768882 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:54.769065 kubelet[2732]: E1213 14:30:54.768993 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.769065 kubelet[2732]: W1213 14:30:54.769000 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.769378 kubelet[2732]: E1213 14:30:54.769230 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.769378 kubelet[2732]: E1213 14:30:54.769271 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.769378 kubelet[2732]: W1213 14:30:54.769276 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.769575 kubelet[2732]: E1213 14:30:54.769533 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.769648 kubelet[2732]: E1213 14:30:54.769642 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.769701 kubelet[2732]: W1213 14:30:54.769693 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.770069 kubelet[2732]: E1213 14:30:54.770055 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.770385 kubelet[2732]: E1213 14:30:54.770372 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.770484 kubelet[2732]: W1213 14:30:54.770470 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.770676 kubelet[2732]: E1213 14:30:54.770655 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.770892 kubelet[2732]: E1213 14:30:54.770880 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.770981 kubelet[2732]: W1213 14:30:54.770968 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.771408 kubelet[2732]: E1213 14:30:54.771138 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:54.772076 kubelet[2732]: E1213 14:30:54.772062 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.772192 kubelet[2732]: W1213 14:30:54.772178 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.772366 kubelet[2732]: E1213 14:30:54.772350 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.772486 kubelet[2732]: E1213 14:30:54.772478 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.772628 kubelet[2732]: W1213 14:30:54.772617 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.772796 kubelet[2732]: E1213 14:30:54.772783 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.773426 kubelet[2732]: E1213 14:30:54.773411 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.773527 kubelet[2732]: W1213 14:30:54.773513 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.773698 kubelet[2732]: E1213 14:30:54.773686 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.773950 kubelet[2732]: E1213 14:30:54.773938 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.774044 kubelet[2732]: W1213 14:30:54.774030 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.774967 kubelet[2732]: E1213 14:30:54.774953 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.775297 kubelet[2732]: E1213 14:30:54.775281 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.775375 kubelet[2732]: W1213 14:30:54.775365 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.775750 kubelet[2732]: E1213 14:30:54.775712 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:54.776014 kubelet[2732]: E1213 14:30:54.775996 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.776431 kubelet[2732]: W1213 14:30:54.776415 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.776627 kubelet[2732]: E1213 14:30:54.776614 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.776909 kubelet[2732]: E1213 14:30:54.776896 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.777009 kubelet[2732]: W1213 14:30:54.776999 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.777327 kubelet[2732]: E1213 14:30:54.777252 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.781002 kubelet[2732]: E1213 14:30:54.780757 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.781002 kubelet[2732]: W1213 14:30:54.780775 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.781002 kubelet[2732]: E1213 14:30:54.780916 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.781181 kubelet[2732]: E1213 14:30:54.781064 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.781181 kubelet[2732]: W1213 14:30:54.781074 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.781413 kubelet[2732]: E1213 14:30:54.781191 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.781413 kubelet[2732]: E1213 14:30:54.781378 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.781413 kubelet[2732]: W1213 14:30:54.781389 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.782354 kubelet[2732]: E1213 14:30:54.781478 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:54.782354 kubelet[2732]: E1213 14:30:54.781615 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.782354 kubelet[2732]: W1213 14:30:54.781625 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.782354 kubelet[2732]: E1213 14:30:54.781751 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.782354 kubelet[2732]: E1213 14:30:54.781901 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.782354 kubelet[2732]: W1213 14:30:54.781911 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.782354 kubelet[2732]: E1213 14:30:54.781932 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.782354 kubelet[2732]: E1213 14:30:54.782335 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.782354 kubelet[2732]: W1213 14:30:54.782346 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.782692 kubelet[2732]: E1213 14:30:54.782362 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.795861 kubelet[2732]: E1213 14:30:54.795360 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:54.795861 kubelet[2732]: W1213 14:30:54.795397 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:54.795861 kubelet[2732]: E1213 14:30:54.795414 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:54.816520 env[1521]: time="2024-12-13T14:30:54.816446760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:54.817121 env[1521]: time="2024-12-13T14:30:54.817059662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:54.818350 env[1521]: time="2024-12-13T14:30:54.817261263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:54.820349 env[1521]: time="2024-12-13T14:30:54.820313976Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca7be5a445d04c6a85520a397dd477c80773b3fb4ca19660fc84ab3220851f92 pid=3176 runtime=io.containerd.runc.v2 Dec 13 14:30:54.875626 env[1521]: time="2024-12-13T14:30:54.875575903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rqnjh,Uid:a562b0a7-137e-4e62-b42a-5ec8c7292df0,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca7be5a445d04c6a85520a397dd477c80773b3fb4ca19660fc84ab3220851f92\"" Dec 13 14:30:55.229000 audit[3213]: NETFILTER_CFG table=filter:96 family=2 entries=17 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:55.229000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffd41d29120 a2=0 a3=7ffd41d2910c items=0 ppid=2867 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:55.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:55.234000 audit[3213]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:30:55.234000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd41d29120 a2=0 a3=0 items=0 ppid=2867 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:30:55.234000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:30:56.251071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720143116.mount: Deactivated successfully. 
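The two audit records above log an iptables-restore invocation; the PROCTITLE field carries the full command line as NUL-separated hex. A minimal Python sketch (illustration only, not part of the captured log) that decodes the value shown in those records:

    # Decode an audit PROCTITLE hex string back into the original argv.
    # The literal below is copied verbatim from the audit record above.
    proctitle = (
        "69707461626C65732D726573746F7265002D770035"
        "002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = [arg.decode() for arg in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The decoded arguments match the comm/exe fields (iptables-restore via /usr/sbin/xtables-nft-multi) and show the wait and counter flags typically passed during a Kubernetes service-rule sync; the parent process itself is not identified in this excerpt.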
Dec 13 14:30:56.977084 kubelet[2732]: E1213 14:30:56.977046 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:30:57.420713 env[1521]: time="2024-12-13T14:30:57.420662408Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:57.427167 env[1521]: time="2024-12-13T14:30:57.427130233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:57.430636 env[1521]: time="2024-12-13T14:30:57.430606847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:57.435401 env[1521]: time="2024-12-13T14:30:57.435369566Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:57.435891 env[1521]: time="2024-12-13T14:30:57.435859468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 14:30:57.446370 env[1521]: time="2024-12-13T14:30:57.446344209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 14:30:57.454659 env[1521]: time="2024-12-13T14:30:57.454627441Z" level=info msg="CreateContainer within sandbox \"3d41eb0fd48e9a7192ba8518d20ebc4d44572f13becc4388f4104cbaa0f19a3f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 14:30:57.485098 env[1521]: time="2024-12-13T14:30:57.485066161Z" level=info msg="CreateContainer within sandbox \"3d41eb0fd48e9a7192ba8518d20ebc4d44572f13becc4388f4104cbaa0f19a3f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7bcad5c65274b7d4cacec6090c8fa466c5e6e0edd41a470a4c333a47bd0b5437\"" Dec 13 14:30:57.485769 env[1521]: time="2024-12-13T14:30:57.485741663Z" level=info msg="StartContainer for \"7bcad5c65274b7d4cacec6090c8fa466c5e6e0edd41a470a4c333a47bd0b5437\"" Dec 13 14:30:57.561711 env[1521]: time="2024-12-13T14:30:57.561659261Z" level=info msg="StartContainer for \"7bcad5c65274b7d4cacec6090c8fa466c5e6e0edd41a470a4c333a47bd0b5437\" returns successfully" Dec 13 14:30:58.119914 kubelet[2732]: I1213 14:30:58.119881 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-64649bd6d8-t5l6v" podStartSLOduration=1.444947907 podStartE2EDuration="4.119844044s" podCreationTimestamp="2024-12-13 14:30:54 +0000 UTC" firstStartedPulling="2024-12-13 14:30:54.761301032 +0000 UTC m=+26.909100187" lastFinishedPulling="2024-12-13 14:30:57.436197069 +0000 UTC m=+29.583996324" observedRunningTime="2024-12-13 14:30:58.119661344 +0000 UTC m=+30.267460499" watchObservedRunningTime="2024-12-13 14:30:58.119844044 +0000 UTC m=+30.267643199" Dec 13 14:30:58.174556 kubelet[2732]: E1213 14:30:58.174524 2732 driver-call.go:262] Failed to unmarshal output for command: 
init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.174556 kubelet[2732]: W1213 14:30:58.174547 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.174901 kubelet[2732]: E1213 14:30:58.174575 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.174901 kubelet[2732]: E1213 14:30:58.174815 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.174901 kubelet[2732]: W1213 14:30:58.174829 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.174901 kubelet[2732]: E1213 14:30:58.174849 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.175085 kubelet[2732]: E1213 14:30:58.175022 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.175085 kubelet[2732]: W1213 14:30:58.175032 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.175085 kubelet[2732]: E1213 14:30:58.175046 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.175243 kubelet[2732]: E1213 14:30:58.175202 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.175243 kubelet[2732]: W1213 14:30:58.175212 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.175243 kubelet[2732]: E1213 14:30:58.175227 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.175408 kubelet[2732]: E1213 14:30:58.175389 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.175408 kubelet[2732]: W1213 14:30:58.175404 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.175525 kubelet[2732]: E1213 14:30:58.175419 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:58.175596 kubelet[2732]: E1213 14:30:58.175579 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.175596 kubelet[2732]: W1213 14:30:58.175592 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.175714 kubelet[2732]: E1213 14:30:58.175607 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.175807 kubelet[2732]: E1213 14:30:58.175792 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.175857 kubelet[2732]: W1213 14:30:58.175808 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.175857 kubelet[2732]: E1213 14:30:58.175824 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.176021 kubelet[2732]: E1213 14:30:58.175997 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.176021 kubelet[2732]: W1213 14:30:58.176006 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.176021 kubelet[2732]: E1213 14:30:58.176020 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.176206 kubelet[2732]: E1213 14:30:58.176193 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.176206 kubelet[2732]: W1213 14:30:58.176204 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.176335 kubelet[2732]: E1213 14:30:58.176218 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.176411 kubelet[2732]: E1213 14:30:58.176394 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.176411 kubelet[2732]: W1213 14:30:58.176408 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.176583 kubelet[2732]: E1213 14:30:58.176424 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:58.176634 kubelet[2732]: E1213 14:30:58.176593 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.176634 kubelet[2732]: W1213 14:30:58.176603 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.176634 kubelet[2732]: E1213 14:30:58.176618 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.176818 kubelet[2732]: E1213 14:30:58.176802 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.176818 kubelet[2732]: W1213 14:30:58.176815 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.176928 kubelet[2732]: E1213 14:30:58.176832 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.177022 kubelet[2732]: E1213 14:30:58.177006 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.177022 kubelet[2732]: W1213 14:30:58.177019 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.177146 kubelet[2732]: E1213 14:30:58.177033 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.177214 kubelet[2732]: E1213 14:30:58.177197 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.177214 kubelet[2732]: W1213 14:30:58.177210 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.177317 kubelet[2732]: E1213 14:30:58.177225 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.177402 kubelet[2732]: E1213 14:30:58.177389 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.177402 kubelet[2732]: W1213 14:30:58.177400 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.177490 kubelet[2732]: E1213 14:30:58.177414 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:58.192668 kubelet[2732]: E1213 14:30:58.191910 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.192668 kubelet[2732]: W1213 14:30:58.191930 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.192668 kubelet[2732]: E1213 14:30:58.191953 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.192668 kubelet[2732]: E1213 14:30:58.192211 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.192668 kubelet[2732]: W1213 14:30:58.192222 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.192668 kubelet[2732]: E1213 14:30:58.192242 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.192668 kubelet[2732]: E1213 14:30:58.192478 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.192668 kubelet[2732]: W1213 14:30:58.192489 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.192668 kubelet[2732]: E1213 14:30:58.192507 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.193189 kubelet[2732]: E1213 14:30:58.192766 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.193189 kubelet[2732]: W1213 14:30:58.192776 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.193189 kubelet[2732]: E1213 14:30:58.192795 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.193189 kubelet[2732]: E1213 14:30:58.193003 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.193189 kubelet[2732]: W1213 14:30:58.193012 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.193189 kubelet[2732]: E1213 14:30:58.193030 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:58.193439 kubelet[2732]: E1213 14:30:58.193213 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.193439 kubelet[2732]: W1213 14:30:58.193222 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.193439 kubelet[2732]: E1213 14:30:58.193308 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.193584 kubelet[2732]: E1213 14:30:58.193459 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.193584 kubelet[2732]: W1213 14:30:58.193468 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.193584 kubelet[2732]: E1213 14:30:58.193545 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.193729 kubelet[2732]: E1213 14:30:58.193664 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.193729 kubelet[2732]: W1213 14:30:58.193672 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.193842 kubelet[2732]: E1213 14:30:58.193765 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.193904 kubelet[2732]: E1213 14:30:58.193888 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.193947 kubelet[2732]: W1213 14:30:58.193904 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.193947 kubelet[2732]: E1213 14:30:58.193924 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.196139 kubelet[2732]: E1213 14:30:58.194355 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.196139 kubelet[2732]: W1213 14:30:58.194369 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.196139 kubelet[2732]: E1213 14:30:58.194390 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:58.196139 kubelet[2732]: E1213 14:30:58.194606 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.196139 kubelet[2732]: W1213 14:30:58.194616 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.196139 kubelet[2732]: E1213 14:30:58.194636 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.196139 kubelet[2732]: E1213 14:30:58.194845 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.196139 kubelet[2732]: W1213 14:30:58.194855 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.196139 kubelet[2732]: E1213 14:30:58.194874 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.196139 kubelet[2732]: E1213 14:30:58.195089 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.196638 kubelet[2732]: W1213 14:30:58.195099 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.196638 kubelet[2732]: E1213 14:30:58.195177 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.196638 kubelet[2732]: E1213 14:30:58.195659 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.196638 kubelet[2732]: W1213 14:30:58.195670 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.196638 kubelet[2732]: E1213 14:30:58.195692 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.196638 kubelet[2732]: E1213 14:30:58.195921 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.196638 kubelet[2732]: W1213 14:30:58.195932 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.196638 kubelet[2732]: E1213 14:30:58.195950 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:58.196638 kubelet[2732]: E1213 14:30:58.196183 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.196638 kubelet[2732]: W1213 14:30:58.196194 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.197004 kubelet[2732]: E1213 14:30:58.196214 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.197004 kubelet[2732]: E1213 14:30:58.196597 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.197004 kubelet[2732]: W1213 14:30:58.196608 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.197004 kubelet[2732]: E1213 14:30:58.196687 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:58.197004 kubelet[2732]: E1213 14:30:58.196835 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:58.197004 kubelet[2732]: W1213 14:30:58.196846 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:58.197004 kubelet[2732]: E1213 14:30:58.196861 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:58.977649 kubelet[2732]: E1213 14:30:58.977612 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:30:59.034758 env[1521]: time="2024-12-13T14:30:59.034694375Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.041082 env[1521]: time="2024-12-13T14:30:59.040748398Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.049335 env[1521]: time="2024-12-13T14:30:59.049293031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.053038 env[1521]: time="2024-12-13T14:30:59.053009345Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.053469 env[1521]: time="2024-12-13T14:30:59.053438847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 14:30:59.055589 env[1521]: time="2024-12-13T14:30:59.055550355Z" level=info msg="CreateContainer within sandbox \"ca7be5a445d04c6a85520a397dd477c80773b3fb4ca19660fc84ab3220851f92\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 14:30:59.081356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925124194.mount: Deactivated successfully. Dec 13 14:30:59.100334 env[1521]: time="2024-12-13T14:30:59.100284125Z" level=info msg="CreateContainer within sandbox \"ca7be5a445d04c6a85520a397dd477c80773b3fb4ca19660fc84ab3220851f92\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f403a94a96bf22ce7f9438a64ef362f3b9b32f766068d2e802995f9091dd7fc6\"" Dec 13 14:30:59.102031 env[1521]: time="2024-12-13T14:30:59.101973631Z" level=info msg="StartContainer for \"f403a94a96bf22ce7f9438a64ef362f3b9b32f766068d2e802995f9091dd7fc6\"" Dec 13 14:30:59.119160 kubelet[2732]: I1213 14:30:59.110540 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:30:59.137228 systemd[1]: run-containerd-runc-k8s.io-f403a94a96bf22ce7f9438a64ef362f3b9b32f766068d2e802995f9091dd7fc6-runc.KvE8g0.mount: Deactivated successfully. 
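The driver-call.go and plugins.go errors repeated throughout this stretch come from the kubelet's periodic FlexVolume probe: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and parses the driver's stdout as JSON; because the binary is not yet present ("executable file not found in $PATH"), the output is empty and unmarshalling fails with "unexpected end of JSON input". The flexvol-driver container started above is what normally installs that uds binary. As an illustration of the calling convention only (a sketch, not Calico's actual driver), a FlexVolume-style init responder could look like:

    #!/usr/bin/env python3
    # Sketch of the FlexVolume driver call convention: the kubelet runs
    # "<driver> init" and parses the driver's stdout as a JSON status object.
    # Illustrative stand-in only; not the real nodeagent~uds/uds binary.
    import json
    import sys

    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            # Success plus capabilities; attach=False means no attach/detach calls.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Report any other sub-command as unsupported.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

An executable in the nodeagent~uds directory that answers init this way would satisfy the probe; until then the kubelet keeps retrying and logging the same three-line error sequence seen above.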
Dec 13 14:30:59.182523 env[1521]: time="2024-12-13T14:30:59.175950312Z" level=info msg="StartContainer for \"f403a94a96bf22ce7f9438a64ef362f3b9b32f766068d2e802995f9091dd7fc6\" returns successfully" Dec 13 14:30:59.184217 kubelet[2732]: E1213 14:30:59.184179 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.184217 kubelet[2732]: W1213 14:30:59.184204 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.184706 kubelet[2732]: E1213 14:30:59.184255 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.184706 kubelet[2732]: E1213 14:30:59.184450 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.184706 kubelet[2732]: W1213 14:30:59.184461 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.184706 kubelet[2732]: E1213 14:30:59.184479 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.184706 kubelet[2732]: E1213 14:30:59.184657 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.184706 kubelet[2732]: W1213 14:30:59.184667 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.184706 kubelet[2732]: E1213 14:30:59.184683 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.185045 kubelet[2732]: E1213 14:30:59.184896 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.185045 kubelet[2732]: W1213 14:30:59.184906 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.185045 kubelet[2732]: E1213 14:30:59.184922 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:59.185177 kubelet[2732]: E1213 14:30:59.185118 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.185177 kubelet[2732]: W1213 14:30:59.185128 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.185177 kubelet[2732]: E1213 14:30:59.185142 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.185652 kubelet[2732]: E1213 14:30:59.185322 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.185652 kubelet[2732]: W1213 14:30:59.185338 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.185652 kubelet[2732]: E1213 14:30:59.185357 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.185652 kubelet[2732]: E1213 14:30:59.185574 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.185652 kubelet[2732]: W1213 14:30:59.185586 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.185652 kubelet[2732]: E1213 14:30:59.185602 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.185999 kubelet[2732]: E1213 14:30:59.185816 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.185999 kubelet[2732]: W1213 14:30:59.185826 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.185999 kubelet[2732]: E1213 14:30:59.185841 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.186126 kubelet[2732]: E1213 14:30:59.186040 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.186126 kubelet[2732]: W1213 14:30:59.186049 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.186126 kubelet[2732]: E1213 14:30:59.186065 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:59.186268 kubelet[2732]: E1213 14:30:59.186245 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.186268 kubelet[2732]: W1213 14:30:59.186254 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.186268 kubelet[2732]: E1213 14:30:59.186268 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.190419 kubelet[2732]: E1213 14:30:59.187062 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.190419 kubelet[2732]: W1213 14:30:59.187080 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.190419 kubelet[2732]: E1213 14:30:59.187117 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.190419 kubelet[2732]: E1213 14:30:59.187372 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.190419 kubelet[2732]: W1213 14:30:59.187384 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.190419 kubelet[2732]: E1213 14:30:59.187402 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.190419 kubelet[2732]: E1213 14:30:59.187657 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.190419 kubelet[2732]: W1213 14:30:59.187669 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.190419 kubelet[2732]: E1213 14:30:59.187692 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:30:59.190419 kubelet[2732]: E1213 14:30:59.187982 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.190900 kubelet[2732]: W1213 14:30:59.187994 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.190900 kubelet[2732]: E1213 14:30:59.188019 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:30:59.190900 kubelet[2732]: E1213 14:30:59.188235 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:30:59.190900 kubelet[2732]: W1213 14:30:59.188248 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:30:59.190900 kubelet[2732]: E1213 14:30:59.188274 2732 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:31:00.074923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f403a94a96bf22ce7f9438a64ef362f3b9b32f766068d2e802995f9091dd7fc6-rootfs.mount: Deactivated successfully. Dec 13 14:31:00.619238 env[1521]: time="2024-12-13T14:31:00.619183964Z" level=info msg="shim disconnected" id=f403a94a96bf22ce7f9438a64ef362f3b9b32f766068d2e802995f9091dd7fc6 Dec 13 14:31:00.619788 env[1521]: time="2024-12-13T14:31:00.619293164Z" level=warning msg="cleaning up after shim disconnected" id=f403a94a96bf22ce7f9438a64ef362f3b9b32f766068d2e802995f9091dd7fc6 namespace=k8s.io Dec 13 14:31:00.619788 env[1521]: time="2024-12-13T14:31:00.619309665Z" level=info msg="cleaning up dead shim" Dec 13 14:31:00.627356 env[1521]: time="2024-12-13T14:31:00.627317895Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3360 runtime=io.containerd.runc.v2\n" Dec 13 14:31:00.978083 kubelet[2732]: E1213 14:31:00.977678 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:31:01.118079 env[1521]: time="2024-12-13T14:31:01.118038225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 14:31:02.977458 kubelet[2732]: E1213 14:31:02.977379 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:31:04.976788 kubelet[2732]: E1213 14:31:04.976758 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:31:06.977049 kubelet[2732]: E1213 14:31:06.977013 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:31:07.365363 env[1521]: time="2024-12-13T14:31:07.365309872Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Dec 13 14:31:07.371400 env[1521]: time="2024-12-13T14:31:07.371359793Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:07.374092 env[1521]: time="2024-12-13T14:31:07.374059902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:07.377321 env[1521]: time="2024-12-13T14:31:07.377290113Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:07.377796 env[1521]: time="2024-12-13T14:31:07.377763814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 14:31:07.380322 env[1521]: time="2024-12-13T14:31:07.380286623Z" level=info msg="CreateContainer within sandbox \"ca7be5a445d04c6a85520a397dd477c80773b3fb4ca19660fc84ab3220851f92\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:31:07.407079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359923800.mount: Deactivated successfully. Dec 13 14:31:07.419886 env[1521]: time="2024-12-13T14:31:07.419844257Z" level=info msg="CreateContainer within sandbox \"ca7be5a445d04c6a85520a397dd477c80773b3fb4ca19660fc84ab3220851f92\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"850b846015f547b617cfd758445a71211dec8f8a5291085a43a15950b9ab84cf\"" Dec 13 14:31:07.421913 env[1521]: time="2024-12-13T14:31:07.420509859Z" level=info msg="StartContainer for \"850b846015f547b617cfd758445a71211dec8f8a5291085a43a15950b9ab84cf\"" Dec 13 14:31:07.490059 env[1521]: time="2024-12-13T14:31:07.490015794Z" level=info msg="StartContainer for \"850b846015f547b617cfd758445a71211dec8f8a5291085a43a15950b9ab84cf\" returns successfully" Dec 13 14:31:08.402583 systemd[1]: run-containerd-runc-k8s.io-850b846015f547b617cfd758445a71211dec8f8a5291085a43a15950b9ab84cf-runc.94PuwG.mount: Deactivated successfully. Dec 13 14:31:08.934086 env[1521]: time="2024-12-13T14:31:08.934018742Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:31:08.957006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-850b846015f547b617cfd758445a71211dec8f8a5291085a43a15950b9ab84cf-rootfs.mount: Deactivated successfully. 
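Editor's note: the install-cni container started above is what eventually drops Calico's CNI configuration into /etc/cni/net.d. The "failed to reload cni configuration" warning fires because a kubeconfig file (calico-kubeconfig) is written there before any *.conflist, so containerd still finds no usable network config at that point. For orientation only, a skeleton of the kind of conflist such an installer writes, generated here with Python as an illustration (field values are assumptions, not copied from this system):

```python
import json

# Illustrative skeleton of a Calico CNI network list; real installs template
# many more fields (IPAM pools, MTU, datastore settings) via the install-cni container.
conflist = {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
        {
            "type": "calico",
            "log_level": "info",
            "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"},
            "ipam": {"type": "calico-ipam"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# Written to a local example file rather than /etc/cni/net.d on purpose.
with open("10-calico.conflist.example", "w") as fh:
    json.dump(conflist, fh, indent=2)
```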
Dec 13 14:31:08.977556 kubelet[2732]: E1213 14:31:08.977495 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:31:09.024765 kubelet[2732]: I1213 14:31:09.024715 2732 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:31:09.056599 kubelet[2732]: I1213 14:31:09.056563 2732 topology_manager.go:215] "Topology Admit Handler" podUID="91fc512f-0007-48c4-b73e-c4c21c09d9e8" podNamespace="kube-system" podName="coredns-76f75df574-f5p2k" Dec 13 14:31:09.063869 kubelet[2732]: I1213 14:31:09.063832 2732 topology_manager.go:215] "Topology Admit Handler" podUID="8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6" podNamespace="calico-system" podName="calico-kube-controllers-547bf79797-klbv4" Dec 13 14:31:09.069831 kubelet[2732]: I1213 14:31:09.069805 2732 topology_manager.go:215] "Topology Admit Handler" podUID="fc5c0614-dc8c-44ff-bad4-378662704e03" podNamespace="kube-system" podName="coredns-76f75df574-mtlt2" Dec 13 14:31:09.070410 kubelet[2732]: I1213 14:31:09.070386 2732 topology_manager.go:215] "Topology Admit Handler" podUID="b609b60d-e7a7-460b-9ba5-524f51fccf87" podNamespace="calico-apiserver" podName="calico-apiserver-79547c4d58-hf5ns" Dec 13 14:31:09.072378 kubelet[2732]: I1213 14:31:09.071585 2732 topology_manager.go:215] "Topology Admit Handler" podUID="5544211b-a287-4828-991d-868381eea812" podNamespace="calico-apiserver" podName="calico-apiserver-79547c4d58-vbfr7" Dec 13 14:31:09.076982 kubelet[2732]: I1213 14:31:09.076959 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr8cn\" (UniqueName: \"kubernetes.io/projected/91fc512f-0007-48c4-b73e-c4c21c09d9e8-kube-api-access-mr8cn\") pod \"coredns-76f75df574-f5p2k\" (UID: \"91fc512f-0007-48c4-b73e-c4c21c09d9e8\") " pod="kube-system/coredns-76f75df574-f5p2k" Dec 13 14:31:09.077081 kubelet[2732]: I1213 14:31:09.077006 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91fc512f-0007-48c4-b73e-c4c21c09d9e8-config-volume\") pod \"coredns-76f75df574-f5p2k\" (UID: \"91fc512f-0007-48c4-b73e-c4c21c09d9e8\") " pod="kube-system/coredns-76f75df574-f5p2k" Dec 13 14:31:09.178263 kubelet[2732]: I1213 14:31:09.178226 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jpc8\" (UniqueName: \"kubernetes.io/projected/fc5c0614-dc8c-44ff-bad4-378662704e03-kube-api-access-9jpc8\") pod \"coredns-76f75df574-mtlt2\" (UID: \"fc5c0614-dc8c-44ff-bad4-378662704e03\") " pod="kube-system/coredns-76f75df574-mtlt2" Dec 13 14:31:09.179041 kubelet[2732]: I1213 14:31:09.179016 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6-tigera-ca-bundle\") pod \"calico-kube-controllers-547bf79797-klbv4\" (UID: \"8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6\") " pod="calico-system/calico-kube-controllers-547bf79797-klbv4" Dec 13 14:31:09.179262 kubelet[2732]: I1213 14:31:09.179244 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/5544211b-a287-4828-991d-868381eea812-calico-apiserver-certs\") pod \"calico-apiserver-79547c4d58-vbfr7\" (UID: \"5544211b-a287-4828-991d-868381eea812\") " pod="calico-apiserver/calico-apiserver-79547c4d58-vbfr7" Dec 13 14:31:09.179830 kubelet[2732]: I1213 14:31:09.179799 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq9xp\" (UniqueName: \"kubernetes.io/projected/b609b60d-e7a7-460b-9ba5-524f51fccf87-kube-api-access-cq9xp\") pod \"calico-apiserver-79547c4d58-hf5ns\" (UID: \"b609b60d-e7a7-460b-9ba5-524f51fccf87\") " pod="calico-apiserver/calico-apiserver-79547c4d58-hf5ns" Dec 13 14:31:09.180104 kubelet[2732]: I1213 14:31:09.180091 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc5c0614-dc8c-44ff-bad4-378662704e03-config-volume\") pod \"coredns-76f75df574-mtlt2\" (UID: \"fc5c0614-dc8c-44ff-bad4-378662704e03\") " pod="kube-system/coredns-76f75df574-mtlt2" Dec 13 14:31:09.180312 kubelet[2732]: I1213 14:31:09.180300 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b609b60d-e7a7-460b-9ba5-524f51fccf87-calico-apiserver-certs\") pod \"calico-apiserver-79547c4d58-hf5ns\" (UID: \"b609b60d-e7a7-460b-9ba5-524f51fccf87\") " pod="calico-apiserver/calico-apiserver-79547c4d58-hf5ns" Dec 13 14:31:09.180669 kubelet[2732]: I1213 14:31:09.180655 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq2t6\" (UniqueName: \"kubernetes.io/projected/8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6-kube-api-access-rq2t6\") pod \"calico-kube-controllers-547bf79797-klbv4\" (UID: \"8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6\") " pod="calico-system/calico-kube-controllers-547bf79797-klbv4" Dec 13 14:31:09.180946 kubelet[2732]: I1213 14:31:09.180839 2732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2sx\" (UniqueName: \"kubernetes.io/projected/5544211b-a287-4828-991d-868381eea812-kube-api-access-jk2sx\") pod \"calico-apiserver-79547c4d58-vbfr7\" (UID: \"5544211b-a287-4828-991d-868381eea812\") " pod="calico-apiserver/calico-apiserver-79547c4d58-vbfr7" Dec 13 14:31:09.363403 env[1521]: time="2024-12-13T14:31:09.363347460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5p2k,Uid:91fc512f-0007-48c4-b73e-c4c21c09d9e8,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:09.366439 env[1521]: time="2024-12-13T14:31:09.366400370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547bf79797-klbv4,Uid:8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6,Namespace:calico-system,Attempt:0,}" Dec 13 14:31:09.372839 env[1521]: time="2024-12-13T14:31:09.372807892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mtlt2,Uid:fc5c0614-dc8c-44ff-bad4-378662704e03,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:09.386523 env[1521]: time="2024-12-13T14:31:09.386474837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79547c4d58-hf5ns,Uid:b609b60d-e7a7-460b-9ba5-524f51fccf87,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:31:09.386875 env[1521]: time="2024-12-13T14:31:09.386847338Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-79547c4d58-vbfr7,Uid:5544211b-a287-4828-991d-868381eea812,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:31:10.614840 env[1521]: time="2024-12-13T14:31:10.614753761Z" level=info msg="shim disconnected" id=850b846015f547b617cfd758445a71211dec8f8a5291085a43a15950b9ab84cf Dec 13 14:31:10.614840 env[1521]: time="2024-12-13T14:31:10.614812661Z" level=warning msg="cleaning up after shim disconnected" id=850b846015f547b617cfd758445a71211dec8f8a5291085a43a15950b9ab84cf namespace=k8s.io Dec 13 14:31:10.614840 env[1521]: time="2024-12-13T14:31:10.614828461Z" level=info msg="cleaning up dead shim" Dec 13 14:31:10.631300 env[1521]: time="2024-12-13T14:31:10.631248315Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3425 runtime=io.containerd.runc.v2\n" Dec 13 14:31:10.867117 env[1521]: time="2024-12-13T14:31:10.866961382Z" level=error msg="Failed to destroy network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.867838 env[1521]: time="2024-12-13T14:31:10.867784585Z" level=error msg="encountered an error cleaning up failed sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.868047 env[1521]: time="2024-12-13T14:31:10.868004385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5p2k,Uid:91fc512f-0007-48c4-b73e-c4c21c09d9e8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.868445 kubelet[2732]: E1213 14:31:10.868419 2732 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.868915 kubelet[2732]: E1213 14:31:10.868499 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f5p2k" Dec 13 14:31:10.868915 kubelet[2732]: E1213 14:31:10.868531 2732 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f5p2k" Dec 13 14:31:10.868915 kubelet[2732]: E1213 14:31:10.868610 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-f5p2k_kube-system(91fc512f-0007-48c4-b73e-c4c21c09d9e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-f5p2k_kube-system(91fc512f-0007-48c4-b73e-c4c21c09d9e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f5p2k" podUID="91fc512f-0007-48c4-b73e-c4c21c09d9e8" Dec 13 14:31:10.898510 env[1521]: time="2024-12-13T14:31:10.898447284Z" level=error msg="Failed to destroy network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.899094 env[1521]: time="2024-12-13T14:31:10.899043286Z" level=error msg="encountered an error cleaning up failed sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.899310 env[1521]: time="2024-12-13T14:31:10.899263987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79547c4d58-vbfr7,Uid:5544211b-a287-4828-991d-868381eea812,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.899765 kubelet[2732]: E1213 14:31:10.899739 2732 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.899872 kubelet[2732]: E1213 14:31:10.899816 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79547c4d58-vbfr7" Dec 13 14:31:10.899872 kubelet[2732]: E1213 14:31:10.899848 2732 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79547c4d58-vbfr7" Dec 13 14:31:10.899968 kubelet[2732]: E1213 14:31:10.899935 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79547c4d58-vbfr7_calico-apiserver(5544211b-a287-4828-991d-868381eea812)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79547c4d58-vbfr7_calico-apiserver(5544211b-a287-4828-991d-868381eea812)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79547c4d58-vbfr7" podUID="5544211b-a287-4828-991d-868381eea812" Dec 13 14:31:10.929847 env[1521]: time="2024-12-13T14:31:10.929781586Z" level=error msg="Failed to destroy network for sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.930441 env[1521]: time="2024-12-13T14:31:10.930392188Z" level=error msg="encountered an error cleaning up failed sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.930689 env[1521]: time="2024-12-13T14:31:10.930645189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mtlt2,Uid:fc5c0614-dc8c-44ff-bad4-378662704e03,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.931978 kubelet[2732]: E1213 14:31:10.931036 2732 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.931978 kubelet[2732]: E1213 14:31:10.931094 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mtlt2" Dec 13 14:31:10.931978 kubelet[2732]: E1213 14:31:10.931123 2732 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mtlt2" Dec 13 14:31:10.932192 kubelet[2732]: E1213 14:31:10.931192 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mtlt2_kube-system(fc5c0614-dc8c-44ff-bad4-378662704e03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mtlt2_kube-system(fc5c0614-dc8c-44ff-bad4-378662704e03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mtlt2" podUID="fc5c0614-dc8c-44ff-bad4-378662704e03" Dec 13 14:31:10.933309 env[1521]: time="2024-12-13T14:31:10.933167697Z" level=error msg="Failed to destroy network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.933845 env[1521]: time="2024-12-13T14:31:10.933808399Z" level=error msg="encountered an error cleaning up failed sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.933998 env[1521]: time="2024-12-13T14:31:10.933966800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79547c4d58-hf5ns,Uid:b609b60d-e7a7-460b-9ba5-524f51fccf87,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.934339 kubelet[2732]: E1213 14:31:10.934294 2732 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.934591 kubelet[2732]: E1213 14:31:10.934454 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79547c4d58-hf5ns" Dec 13 14:31:10.934591 kubelet[2732]: E1213 14:31:10.934488 2732 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79547c4d58-hf5ns" Dec 13 14:31:10.934591 kubelet[2732]: E1213 14:31:10.934548 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79547c4d58-hf5ns_calico-apiserver(b609b60d-e7a7-460b-9ba5-524f51fccf87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79547c4d58-hf5ns_calico-apiserver(b609b60d-e7a7-460b-9ba5-524f51fccf87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79547c4d58-hf5ns" podUID="b609b60d-e7a7-460b-9ba5-524f51fccf87" Dec 13 14:31:10.943497 env[1521]: time="2024-12-13T14:31:10.943445231Z" level=error msg="Failed to destroy network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.943900 env[1521]: time="2024-12-13T14:31:10.943853432Z" level=error msg="encountered an error cleaning up failed sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.944006 env[1521]: time="2024-12-13T14:31:10.943903332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547bf79797-klbv4,Uid:8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.944201 kubelet[2732]: E1213 14:31:10.944182 2732 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:10.944329 kubelet[2732]: E1213 14:31:10.944230 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547bf79797-klbv4" Dec 13 
14:31:10.944329 kubelet[2732]: E1213 14:31:10.944256 2732 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547bf79797-klbv4" Dec 13 14:31:10.944455 kubelet[2732]: E1213 14:31:10.944329 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-547bf79797-klbv4_calico-system(8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-547bf79797-klbv4_calico-system(8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-547bf79797-klbv4" podUID="8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6" Dec 13 14:31:10.980443 env[1521]: time="2024-12-13T14:31:10.980400051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-889kf,Uid:54fdb1bd-71c2-40c0-a409-956b5f88cc85,Namespace:calico-system,Attempt:0,}" Dec 13 14:31:11.058277 env[1521]: time="2024-12-13T14:31:11.058213502Z" level=error msg="Failed to destroy network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:11.058595 env[1521]: time="2024-12-13T14:31:11.058554803Z" level=error msg="encountered an error cleaning up failed sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:11.058685 env[1521]: time="2024-12-13T14:31:11.058617903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-889kf,Uid:54fdb1bd-71c2-40c0-a409-956b5f88cc85,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:11.058929 kubelet[2732]: E1213 14:31:11.058905 2732 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:11.059020 kubelet[2732]: E1213 14:31:11.058976 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-889kf" Dec 13 14:31:11.059020 kubelet[2732]: E1213 14:31:11.059004 2732 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-889kf" Dec 13 14:31:11.059124 kubelet[2732]: E1213 14:31:11.059078 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-889kf_calico-system(54fdb1bd-71c2-40c0-a409-956b5f88cc85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-889kf_calico-system(54fdb1bd-71c2-40c0-a409-956b5f88cc85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:31:11.142156 kubelet[2732]: I1213 14:31:11.138161 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:11.142156 kubelet[2732]: I1213 14:31:11.140703 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:11.142373 env[1521]: time="2024-12-13T14:31:11.139175862Z" level=info msg="StopPodSandbox for \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\"" Dec 13 14:31:11.142373 env[1521]: time="2024-12-13T14:31:11.141270169Z" level=info msg="StopPodSandbox for \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\"" Dec 13 14:31:11.142486 kubelet[2732]: I1213 14:31:11.142459 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:11.146429 env[1521]: time="2024-12-13T14:31:11.146397786Z" level=info msg="StopPodSandbox for \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\"" Dec 13 14:31:11.147331 kubelet[2732]: I1213 14:31:11.146814 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:11.147873 env[1521]: time="2024-12-13T14:31:11.147844790Z" level=info msg="StopPodSandbox for \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\"" Dec 13 14:31:11.149145 kubelet[2732]: I1213 14:31:11.149123 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:11.149766 env[1521]: time="2024-12-13T14:31:11.149704396Z" level=info msg="StopPodSandbox for 
\"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\"" Dec 13 14:31:11.152670 kubelet[2732]: I1213 14:31:11.152648 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:11.153352 env[1521]: time="2024-12-13T14:31:11.153306108Z" level=info msg="StopPodSandbox for \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\"" Dec 13 14:31:11.172642 env[1521]: time="2024-12-13T14:31:11.172606870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 14:31:11.228018 env[1521]: time="2024-12-13T14:31:11.227953748Z" level=error msg="StopPodSandbox for \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\" failed" error="failed to destroy network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:11.228741 kubelet[2732]: E1213 14:31:11.228498 2732 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:11.228741 kubelet[2732]: E1213 14:31:11.228602 2732 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5"} Dec 13 14:31:11.228741 kubelet[2732]: E1213 14:31:11.228654 2732 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5544211b-a287-4828-991d-868381eea812\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:31:11.228741 kubelet[2732]: E1213 14:31:11.228700 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5544211b-a287-4828-991d-868381eea812\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79547c4d58-vbfr7" podUID="5544211b-a287-4828-991d-868381eea812" Dec 13 14:31:11.236713 env[1521]: time="2024-12-13T14:31:11.236653076Z" level=error msg="StopPodSandbox for \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\" failed" error="failed to destroy network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
14:31:11.237007 kubelet[2732]: E1213 14:31:11.236972 2732 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:11.237103 kubelet[2732]: E1213 14:31:11.237035 2732 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b"} Dec 13 14:31:11.237103 kubelet[2732]: E1213 14:31:11.237095 2732 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91fc512f-0007-48c4-b73e-c4c21c09d9e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:31:11.237246 kubelet[2732]: E1213 14:31:11.237150 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91fc512f-0007-48c4-b73e-c4c21c09d9e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f5p2k" podUID="91fc512f-0007-48c4-b73e-c4c21c09d9e8" Dec 13 14:31:11.283274 env[1521]: time="2024-12-13T14:31:11.283193125Z" level=error msg="StopPodSandbox for \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\" failed" error="failed to destroy network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:11.283560 kubelet[2732]: E1213 14:31:11.283534 2732 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:11.283667 kubelet[2732]: E1213 14:31:11.283588 2732 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7"} Dec 13 14:31:11.283667 kubelet[2732]: E1213 14:31:11.283649 2732 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b609b60d-e7a7-460b-9ba5-524f51fccf87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:31:11.283829 kubelet[2732]: E1213 14:31:11.283703 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b609b60d-e7a7-460b-9ba5-524f51fccf87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79547c4d58-hf5ns" podUID="b609b60d-e7a7-460b-9ba5-524f51fccf87" Dec 13 14:31:11.285514 env[1521]: time="2024-12-13T14:31:11.285463333Z" level=error msg="StopPodSandbox for \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\" failed" error="failed to destroy network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:11.285910 kubelet[2732]: E1213 14:31:11.285746 2732 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:11.285910 kubelet[2732]: E1213 14:31:11.285785 2732 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867"} Dec 13 14:31:11.285910 kubelet[2732]: E1213 14:31:11.285828 2732 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:31:11.285910 kubelet[2732]: E1213 14:31:11.285863 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-547bf79797-klbv4" podUID="8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6" Dec 13 14:31:11.297244 env[1521]: time="2024-12-13T14:31:11.297202270Z" level=error msg="StopPodSandbox for 
\"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\" failed" error="failed to destroy network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:11.297428 kubelet[2732]: E1213 14:31:11.297402 2732 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:11.297520 kubelet[2732]: E1213 14:31:11.297438 2732 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261"} Dec 13 14:31:11.297520 kubelet[2732]: E1213 14:31:11.297481 2732 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54fdb1bd-71c2-40c0-a409-956b5f88cc85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:31:11.297650 kubelet[2732]: E1213 14:31:11.297525 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54fdb1bd-71c2-40c0-a409-956b5f88cc85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-889kf" podUID="54fdb1bd-71c2-40c0-a409-956b5f88cc85" Dec 13 14:31:11.298906 env[1521]: time="2024-12-13T14:31:11.298865676Z" level=error msg="StopPodSandbox for \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\" failed" error="failed to destroy network for sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:31:11.299111 kubelet[2732]: E1213 14:31:11.299086 2732 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:11.299194 kubelet[2732]: E1213 14:31:11.299121 2732 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c"} Dec 13 14:31:11.299194 kubelet[2732]: E1213 14:31:11.299165 2732 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc5c0614-dc8c-44ff-bad4-378662704e03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:31:11.299282 kubelet[2732]: E1213 14:31:11.299199 2732 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc5c0614-dc8c-44ff-bad4-378662704e03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mtlt2" podUID="fc5c0614-dc8c-44ff-bad4-378662704e03" Dec 13 14:31:11.696496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7-shm.mount: Deactivated successfully. Dec 13 14:31:11.696699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c-shm.mount: Deactivated successfully. Dec 13 14:31:11.696891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867-shm.mount: Deactivated successfully. Dec 13 14:31:11.697040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5-shm.mount: Deactivated successfully. Dec 13 14:31:11.697192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b-shm.mount: Deactivated successfully. 
Dec 13 14:31:12.370494 kubelet[2732]: I1213 14:31:12.370459 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:31:12.414749 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 14:31:12.414881 kernel: audit: type=1325 audit(1734100272.403:298): table=filter:98 family=2 entries=17 op=nft_register_rule pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:12.403000 audit[3744]: NETFILTER_CFG table=filter:98 family=2 entries=17 op=nft_register_rule pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:12.403000 audit[3744]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffce9c00930 a2=0 a3=7ffce9c0091c items=0 ppid=2867 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:12.439405 kernel: audit: type=1300 audit(1734100272.403:298): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffce9c00930 a2=0 a3=7ffce9c0091c items=0 ppid=2867 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:12.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:12.444000 audit[3744]: NETFILTER_CFG table=nat:99 family=2 entries=19 op=nft_register_chain pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:12.459420 kernel: audit: type=1327 audit(1734100272.403:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:12.459502 kernel: audit: type=1325 audit(1734100272.444:299): table=nat:99 family=2 entries=19 op=nft_register_chain pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:12.459535 kernel: audit: type=1300 audit(1734100272.444:299): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffce9c00930 a2=0 a3=7ffce9c0091c items=0 ppid=2867 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:12.444000 audit[3744]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffce9c00930 a2=0 a3=7ffce9c0091c items=0 ppid=2867 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:12.444000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:12.489488 kernel: audit: type=1327 audit(1734100272.444:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:19.500458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160754790.mount: Deactivated successfully. 
Dec 13 14:31:19.541669 env[1521]: time="2024-12-13T14:31:19.541608974Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.547974 env[1521]: time="2024-12-13T14:31:19.547935392Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.551215 env[1521]: time="2024-12-13T14:31:19.551183102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.554436 env[1521]: time="2024-12-13T14:31:19.554406711Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:19.554812 env[1521]: time="2024-12-13T14:31:19.554780212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 14:31:19.574538 env[1521]: time="2024-12-13T14:31:19.574500170Z" level=info msg="CreateContainer within sandbox \"ca7be5a445d04c6a85520a397dd477c80773b3fb4ca19660fc84ab3220851f92\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 14:31:19.621264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2453813551.mount: Deactivated successfully. Dec 13 14:31:19.632798 env[1521]: time="2024-12-13T14:31:19.632748541Z" level=info msg="CreateContainer within sandbox \"ca7be5a445d04c6a85520a397dd477c80773b3fb4ca19660fc84ab3220851f92\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0c8807f5ea6dbd33b8c31a67d5730c9cc8fe2aec7030aae3b7f858606b495407\"" Dec 13 14:31:19.635506 env[1521]: time="2024-12-13T14:31:19.633493543Z" level=info msg="StartContainer for \"0c8807f5ea6dbd33b8c31a67d5730c9cc8fe2aec7030aae3b7f858606b495407\"" Dec 13 14:31:19.691921 env[1521]: time="2024-12-13T14:31:19.691837514Z" level=info msg="StartContainer for \"0c8807f5ea6dbd33b8c31a67d5730c9cc8fe2aec7030aae3b7f858606b495407\" returns successfully" Dec 13 14:31:20.046675 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 14:31:20.046848 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 14:31:20.205588 kubelet[2732]: I1213 14:31:20.203976 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-rqnjh" podStartSLOduration=1.5257406040000001 podStartE2EDuration="26.203908608s" podCreationTimestamp="2024-12-13 14:30:54 +0000 UTC" firstStartedPulling="2024-12-13 14:30:54.876936309 +0000 UTC m=+27.024735464" lastFinishedPulling="2024-12-13 14:31:19.555104213 +0000 UTC m=+51.702903468" observedRunningTime="2024-12-13 14:31:20.203197906 +0000 UTC m=+52.350997061" watchObservedRunningTime="2024-12-13 14:31:20.203908608 +0000 UTC m=+52.351707763" Dec 13 14:31:21.201120 systemd[1]: run-containerd-runc-k8s.io-0c8807f5ea6dbd33b8c31a67d5730c9cc8fe2aec7030aae3b7f858606b495407-runc.erWisd.mount: Deactivated successfully. 
Dec 13 14:31:21.355000 audit[3902]: AVC avc: denied { write } for pid=3902 comm="tee" name="fd" dev="proc" ino=31821 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:31:21.371755 kernel: audit: type=1400 audit(1734100281.355:300): avc: denied { write } for pid=3902 comm="tee" name="fd" dev="proc" ino=31821 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:31:21.355000 audit[3902]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc107f69f9 a2=241 a3=1b6 items=1 ppid=3868 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.394748 kernel: audit: type=1300 audit(1734100281.355:300): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc107f69f9 a2=241 a3=1b6 items=1 ppid=3868 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.355000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 14:31:21.355000 audit: PATH item=0 name="/dev/fd/63" inode=31815 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:21.424713 kernel: audit: type=1307 audit(1734100281.355:300): cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 14:31:21.424844 kernel: audit: type=1302 audit(1734100281.355:300): item=0 name="/dev/fd/63" inode=31815 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:21.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:31:21.438739 kernel: audit: type=1327 audit(1734100281.355:300): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:31:21.398000 audit[3904]: AVC avc: denied { write } for pid=3904 comm="tee" name="fd" dev="proc" ino=31830 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:31:21.456754 kernel: audit: type=1400 audit(1734100281.398:301): avc: denied { write } for pid=3904 comm="tee" name="fd" dev="proc" ino=31830 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:31:21.456857 kernel: audit: type=1300 audit(1734100281.398:301): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc6d5dca08 a2=241 a3=1b6 items=1 ppid=3870 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.398000 audit[3904]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc6d5dca08 a2=241 a3=1b6 items=1 ppid=3870 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.398000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 
14:31:21.486753 kernel: audit: type=1307 audit(1734100281.398:301): cwd="/etc/service/enabled/bird6/log" Dec 13 14:31:21.398000 audit: PATH item=0 name="/dev/fd/63" inode=31818 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:21.501738 kernel: audit: type=1302 audit(1734100281.398:301): item=0 name="/dev/fd/63" inode=31818 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:21.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:31:21.519727 kernel: audit: type=1327 audit(1734100281.398:301): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:31:21.426000 audit[3909]: AVC avc: denied { write } for pid=3909 comm="tee" name="fd" dev="proc" ino=30913 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:31:21.426000 audit[3909]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff56ca3a08 a2=241 a3=1b6 items=1 ppid=3865 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.426000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 14:31:21.426000 audit: PATH item=0 name="/dev/fd/63" inode=30896 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:21.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:31:21.427000 audit[3906]: AVC avc: denied { write } for pid=3906 comm="tee" name="fd" dev="proc" ino=30915 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:31:21.427000 audit[3906]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc90b3ca08 a2=241 a3=1b6 items=1 ppid=3872 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.427000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 14:31:21.427000 audit: PATH item=0 name="/dev/fd/63" inode=30893 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:21.427000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:31:21.457000 audit[3935]: AVC avc: denied { write } for pid=3935 comm="tee" name="fd" dev="proc" ino=30926 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:31:21.457000 audit[3935]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd1c1929f8 a2=241 a3=1b6 items=1 ppid=3880 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.457000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 14:31:21.457000 audit: PATH item=0 name="/dev/fd/63" inode=30917 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:21.457000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:31:21.477000 audit[3937]: AVC avc: denied { write } for pid=3937 comm="tee" name="fd" dev="proc" ino=31834 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:31:21.477000 audit[3937]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe754f7a09 a2=241 a3=1b6 items=1 ppid=3883 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.477000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 14:31:21.477000 audit: PATH item=0 name="/dev/fd/63" inode=30920 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:21.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:31:21.504000 audit[3942]: AVC avc: denied { write } for pid=3942 comm="tee" name="fd" dev="proc" ino=30931 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:31:21.504000 audit[3942]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd8593ea0a a2=241 a3=1b6 items=1 ppid=3886 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.504000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 14:31:21.504000 audit: PATH item=0 name="/dev/fd/63" inode=30928 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:31:21.504000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:31:21.737000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.737000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.737000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.737000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:31:21.737000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.737000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.737000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.737000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.737000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.737000 audit: BPF prog-id=10 op=LOAD Dec 13 14:31:21.737000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe23fc5c60 a2=98 a3=3 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.737000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.737000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 
audit: BPF prog-id=11 op=LOAD Dec 13 14:31:21.738000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe23fc5a40 a2=74 a3=540051 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.738000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.738000 audit: BPF prog-id=12 op=LOAD Dec 13 14:31:21.738000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe23fc5a70 a2=94 a3=2 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.738000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:31:21.876000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.876000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.876000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.876000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.876000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.876000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.876000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.876000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.876000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.876000 audit: BPF prog-id=13 op=LOAD Dec 13 14:31:21.876000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe23fc5930 a2=40 a3=1 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.876000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.877000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:31:21.877000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.877000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe23fc5a00 a2=50 a3=7ffe23fc5ae0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.877000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe23fc5940 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe23fc5970 a2=28 a3=0 items=0 ppid=3867 pid=3972 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe23fc5880 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe23fc5990 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe23fc5970 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe23fc5960 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe23fc5990 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe23fc5970 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe23fc5990 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe23fc5960 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.886000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.886000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe23fc59d0 a2=28 a3=0 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe23fc5780 a2=50 a3=1 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit: BPF prog-id=14 op=LOAD Dec 13 14:31:21.887000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe23fc5780 a2=94 a3=5 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.887000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe23fc5830 a2=50 a3=1 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe23fc5950 a2=4 a3=38 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { confidentiality } for pid=3972 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:31:21.887000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe23fc59a0 a2=94 a3=6 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { confidentiality } for pid=3972 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:31:21.887000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe23fc5150 a2=94 a3=83 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { bpf } for pid=3972 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: AVC avc: denied { perfmon } for pid=3972 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.887000 audit[3972]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe23fc5150 a2=94 a3=83 items=0 ppid=3867 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit: BPF prog-id=15 op=LOAD Dec 13 14:31:21.896000 audit[3979]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6e5ba8d0 a2=98 a3=1999999999999999 items=0 ppid=3867 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.896000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:31:21.896000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit: BPF prog-id=16 op=LOAD Dec 13 14:31:21.896000 audit[3979]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6e5ba7b0 a2=74 a3=ffff items=0 ppid=3867 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.896000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:31:21.896000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { perfmon } for pid=3979 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit[3979]: AVC avc: denied { bpf } for pid=3979 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:21.896000 audit: BPF prog-id=17 op=LOAD Dec 13 14:31:21.896000 audit[3979]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6e5ba7f0 a2=40 a3=7ffc6e5ba9d0 items=0 ppid=3867 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:21.896000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:31:21.897000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:31:21.979388 env[1521]: time="2024-12-13T14:31:21.979343226Z" level=info msg="StopPodSandbox for \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\"" Dec 13 14:31:22.020581 systemd-networkd[1701]: vxlan.calico: Link UP Dec 13 14:31:22.020591 systemd-networkd[1701]: vxlan.calico: Gained carrier Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: 
denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit: BPF prog-id=18 op=LOAD Dec 13 14:31:22.062000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda4c0f660 a2=98 a3=ffffffff items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.062000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.062000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit: BPF prog-id=19 op=LOAD Dec 13 14:31:22.062000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda4c0f470 a2=74 a3=540051 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.062000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.062000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { 
bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.062000 audit: BPF prog-id=20 op=LOAD Dec 13 14:31:22.062000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda4c0f4a0 a2=94 a3=2 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.062000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.063000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:31:22.063000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.063000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda4c0f370 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.063000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.063000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:31:22.063000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda4c0f3a0 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.063000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.063000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.063000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda4c0f2b0 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.063000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.063000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.063000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda4c0f3c0 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.063000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.063000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.063000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda4c0f3a0 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.063000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.063000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.063000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda4c0f390 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.063000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda4c0f3c0 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.065000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda4c0f3a0 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.065000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda4c0f3c0 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.065000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda4c0f390 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.065000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.065000 audit[4028]: 
AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda4c0f400 a2=28 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.065000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit: BPF prog-id=21 op=LOAD Dec 13 14:31:22.065000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda4c0f270 a2=40 a3=0 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.065000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.065000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:31:22.065000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.065000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffda4c0f260 a2=50 a3=2800 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.065000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffda4c0f260 a2=50 a3=2800 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.066000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit: BPF prog-id=22 op=LOAD Dec 13 14:31:22.066000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda4c0ea80 a2=94 a3=2 items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.066000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:31:22.066000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { perfmon } for pid=4028 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit[4028]: AVC avc: denied { bpf } for pid=4028 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.066000 audit: BPF prog-id=23 op=LOAD Dec 13 14:31:22.066000 audit[4028]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda4c0eb80 a2=94 a3=2d items=0 ppid=3867 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.066000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 
13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit: BPF prog-id=24 op=LOAD Dec 13 14:31:22.069000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb932c100 a2=98 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.069000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit: BPF prog-id=25 op=LOAD Dec 13 14:31:22.069000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffb932bee0 a2=74 a3=540051 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.069000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.069000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:31:22.069000 audit: BPF prog-id=26 op=LOAD Dec 13 14:31:22.069000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffb932bf10 a2=94 a3=2 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.069000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.042 [INFO][4005] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.042 [INFO][4005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" iface="eth0" netns="/var/run/netns/cni-530f9ad3-50ba-621b-4300-60d39424bbea" Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.042 [INFO][4005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" iface="eth0" netns="/var/run/netns/cni-530f9ad3-50ba-621b-4300-60d39424bbea" Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.043 [INFO][4005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" iface="eth0" netns="/var/run/netns/cni-530f9ad3-50ba-621b-4300-60d39424bbea" Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.043 [INFO][4005] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.043 [INFO][4005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.107 [INFO][4020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" HandleID="k8s-pod-network.41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.107 [INFO][4020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.107 [INFO][4020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.121 [WARNING][4020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
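The proctitle= values in the bpftool audit records above use the standard audit(7) encoding: the process command line, with its NUL argument separators, is emitted as a hex string because of those non-printable bytes. A minimal decoder along these lines (a sketch, assuming only that encoding; the short hex value is copied from the "map list" record earlier in this capture) makes the Calico bpftool invocations readable:

    # Sketch: turn an audit proctitle= hex blob back into a readable command line.
    def decode_proctitle(hex_str: str) -> str:
        raw = bytes.fromhex(hex_str)                    # proctitle is hex-encoded raw bytes
        return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

    # Value taken from the "bpftool map list" record earlier in this capture:
    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
    # -> bpftool map list --json

Decoded the same way, the longer proctitle values in this stretch correspond to "bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash key 4 value 1 entries 65535 name calico_failsafe_ports_" (truncated by the proctitle length limit in the record), "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp", and "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A".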
Ignoring ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" HandleID="k8s-pod-network.41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.121 [INFO][4020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" HandleID="k8s-pod-network.41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.123 [INFO][4020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:22.129104 env[1521]: 2024-12-13 14:31:22.127 [INFO][4005] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:22.129104 env[1521]: time="2024-12-13T14:31:22.128908052Z" level=info msg="TearDown network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\" successfully" Dec 13 14:31:22.129104 env[1521]: time="2024-12-13T14:31:22.128948452Z" level=info msg="StopPodSandbox for \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\" returns successfully" Dec 13 14:31:22.135517 systemd[1]: run-netns-cni\x2d530f9ad3\x2d50ba\x2d621b\x2d4300\x2d60d39424bbea.mount: Deactivated successfully. Dec 13 14:31:22.138078 env[1521]: time="2024-12-13T14:31:22.137994377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79547c4d58-hf5ns,Uid:b609b60d-e7a7-460b-9ba5-524f51fccf87,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { bpf } for pid=4032 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit: BPF prog-id=27 op=LOAD Dec 13 14:31:22.264000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffb932bdd0 a2=40 a3=1 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.264000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:31:22.264000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.264000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffb932bea0 a2=50 a3=7fffb932bf80 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.277000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.277000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffb932bde0 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.277000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffb932be10 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffb932bd20 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffb932be30 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffb932be10 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffb932be00 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffb932be30 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffb932be10 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffb932be30 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffb932be00 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffb932be70 a2=28 a3=0 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffb932bc20 a2=50 a3=1 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit: BPF prog-id=28 op=LOAD Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffb932bc20 a2=94 a3=5 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffb932bcd0 a2=50 a3=1 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffb932bdf0 a2=4 a3=38 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.284000 audit[4032]: AVC avc: denied { confidentiality } for pid=4032 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:31:22.284000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffb932be40 a2=94 a3=6 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.284000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { confidentiality } for pid=4032 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:31:22.285000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffb932b5f0 a2=94 a3=83 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.285000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:31:22.285000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { perfmon } for pid=4032 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.285000 audit[4032]: AVC avc: denied { confidentiality } for pid=4032 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:31:22.285000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffb932b5f0 a2=94 a3=83 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.285000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.286000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.286000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffb932d030 a2=10 a3=f1f00800 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.286000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.286000 audit[4032]: 
AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.286000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffb932ced0 a2=10 a3=3 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.286000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.286000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.286000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffb932ce70 a2=10 a3=3 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.286000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.286000 audit[4032]: AVC avc: denied { bpf } for pid=4032 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:31:22.286000 audit[4032]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffb932ce70 a2=10 a3=7 items=0 ppid=3867 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.286000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:31:22.300000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:31:22.369080 systemd-networkd[1701]: calic5f53f95e44: Link UP Dec 13 14:31:22.378564 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic5f53f95e44: link becomes ready Dec 13 14:31:22.378967 systemd-networkd[1701]: calic5f53f95e44: Gained carrier Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.272 [INFO][4039] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0 calico-apiserver-79547c4d58- calico-apiserver b609b60d-e7a7-460b-9ba5-524f51fccf87 765 0 2024-12-13 14:30:54 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79547c4d58 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.6-a-e445ccd8ad calico-apiserver-79547c4d58-hf5ns eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic5f53f95e44 [] []}} ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-hf5ns" 
WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.272 [INFO][4039] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-hf5ns" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.316 [INFO][4050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" HandleID="k8s-pod-network.0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.332 [INFO][4050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" HandleID="k8s-pod-network.0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310a90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.6-a-e445ccd8ad", "pod":"calico-apiserver-79547c4d58-hf5ns", "timestamp":"2024-12-13 14:31:22.316138184 +0000 UTC"}, Hostname:"ci-3510.3.6-a-e445ccd8ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.332 [INFO][4050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.332 [INFO][4050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.332 [INFO][4050] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-e445ccd8ad' Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.334 [INFO][4050] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.337 [INFO][4050] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.340 [INFO][4050] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.341 [INFO][4050] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.343 [INFO][4050] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.343 [INFO][4050] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.344 [INFO][4050] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.352 [INFO][4050] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.358 [INFO][4050] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.193/26] block=192.168.89.192/26 handle="k8s-pod-network.0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.358 [INFO][4050] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.193/26] handle="k8s-pod-network.0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.358 [INFO][4050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
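Note: the "calic5f53f95e44: Link UP", "ADDRCONF(NETDEV_CHANGE): calic5f53f95e44: link becomes ready", and "Gained carrier" entries above record the host-side Calico veth coming up. A minimal sketch for checking that state directly via sysfs, using only the Python standard library; the interface name is taken from this log and is assumed to still exist when the snippet runs:

    from pathlib import Path

    # Host-side Calico veth reported by systemd-networkd in this log.
    iface = "calic5f53f95e44"
    base = Path("/sys/class/net") / iface

    # operstate reads "up" once the kernel considers the link operational;
    # carrier reads "1" while the peer end of the veth pair is up.
    print("operstate:", (base / "operstate").read_text().strip())
    print("carrier:  ", (base / "carrier").read_text().strip())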
Dec 13 14:31:22.397933 env[1521]: 2024-12-13 14:31:22.358 [INFO][4050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.193/26] IPv6=[] ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" HandleID="k8s-pod-network.0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.398678 env[1521]: 2024-12-13 14:31:22.360 [INFO][4039] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-hf5ns" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0", GenerateName:"calico-apiserver-79547c4d58-", Namespace:"calico-apiserver", SelfLink:"", UID:"b609b60d-e7a7-460b-9ba5-524f51fccf87", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79547c4d58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"", Pod:"calico-apiserver-79547c4d58-hf5ns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5f53f95e44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:22.398678 env[1521]: 2024-12-13 14:31:22.360 [INFO][4039] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.193/32] ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-hf5ns" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.398678 env[1521]: 2024-12-13 14:31:22.360 [INFO][4039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5f53f95e44 ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-hf5ns" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.398678 env[1521]: 2024-12-13 14:31:22.383 [INFO][4039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-hf5ns" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.398678 env[1521]: 2024-12-13 14:31:22.384 [INFO][4039] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-hf5ns" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0", GenerateName:"calico-apiserver-79547c4d58-", Namespace:"calico-apiserver", SelfLink:"", UID:"b609b60d-e7a7-460b-9ba5-524f51fccf87", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79547c4d58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac", Pod:"calico-apiserver-79547c4d58-hf5ns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5f53f95e44", MAC:"56:20:6e:2b:5c:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:22.398678 env[1521]: 2024-12-13 14:31:22.396 [INFO][4039] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-hf5ns" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:22.414237 env[1521]: time="2024-12-13T14:31:22.414176362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:22.414237 env[1521]: time="2024-12-13T14:31:22.414211362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:22.414421 env[1521]: time="2024-12-13T14:31:22.414224862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:22.414616 env[1521]: time="2024-12-13T14:31:22.414579763Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac pid=4089 runtime=io.containerd.runc.v2 Dec 13 14:31:22.496855 env[1521]: time="2024-12-13T14:31:22.496801297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79547c4d58-hf5ns,Uid:b609b60d-e7a7-460b-9ba5-524f51fccf87,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac\"" Dec 13 14:31:22.502000 audit[4126]: NETFILTER_CFG table=mangle:100 family=2 entries=16 op=nft_register_chain pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:22.502000 audit[4126]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff69719110 a2=0 a3=7fff697190fc items=0 ppid=3867 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.502000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:22.506925 env[1521]: time="2024-12-13T14:31:22.506902325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:31:22.548000 audit[4135]: NETFILTER_CFG table=nat:101 family=2 entries=15 op=nft_register_chain pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:22.548000 audit[4135]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffff8134f20 a2=0 a3=7ffff8134f0c items=0 ppid=3867 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.548000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:22.548000 audit[4128]: NETFILTER_CFG table=filter:102 family=2 entries=39 op=nft_register_chain pid=4128 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:22.548000 audit[4128]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffd7b157a90 a2=0 a3=7ffd7b157a7c items=0 ppid=3867 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.548000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:22.559000 audit[4134]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=4134 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:22.559000 audit[4134]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd6f35d1c0 a2=0 a3=7ffd6f35d1ac items=0 ppid=3867 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 13 14:31:22.559000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:22.579000 audit[4147]: NETFILTER_CFG table=filter:104 family=2 entries=40 op=nft_register_chain pid=4147 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:22.579000 audit[4147]: SYSCALL arch=c000003e syscall=46 success=yes exit=23492 a0=3 a1=7ffccd6f3290 a2=0 a3=7ffccd6f327c items=0 ppid=3867 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:22.579000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:22.977534 env[1521]: time="2024-12-13T14:31:22.977315562Z" level=info msg="StopPodSandbox for \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\"" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.021 [INFO][4161] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.021 [INFO][4161] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" iface="eth0" netns="/var/run/netns/cni-632ea1dd-7e95-e0de-56a8-b60bce386dad" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.022 [INFO][4161] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" iface="eth0" netns="/var/run/netns/cni-632ea1dd-7e95-e0de-56a8-b60bce386dad" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.022 [INFO][4161] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" iface="eth0" netns="/var/run/netns/cni-632ea1dd-7e95-e0de-56a8-b60bce386dad" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.022 [INFO][4161] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.022 [INFO][4161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.042 [INFO][4167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" HandleID="k8s-pod-network.53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.042 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.042 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.047 [WARNING][4167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" HandleID="k8s-pod-network.53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.047 [INFO][4167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" HandleID="k8s-pod-network.53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.048 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:23.050501 env[1521]: 2024-12-13 14:31:23.049 [INFO][4161] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:23.051418 env[1521]: time="2024-12-13T14:31:23.051367871Z" level=info msg="TearDown network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\" successfully" Dec 13 14:31:23.051522 env[1521]: time="2024-12-13T14:31:23.051418171Z" level=info msg="StopPodSandbox for \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\" returns successfully" Dec 13 14:31:23.052270 env[1521]: time="2024-12-13T14:31:23.052237573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547bf79797-klbv4,Uid:8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6,Namespace:calico-system,Attempt:1,}" Dec 13 14:31:23.134843 systemd[1]: run-netns-cni\x2d632ea1dd\x2d7e95\x2de0de\x2d56a8\x2db60bce386dad.mount: Deactivated successfully. 
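Note: the audit PROCTITLE fields in the records above carry the audited process's argument vector as hex-encoded, NUL-separated bytes. A minimal decoding sketch (Python standard library only; the sample value is copied verbatim from the bpftool records in this log):

    # Decode an audit PROCTITLE value: hex string -> NUL-separated argv -> command line.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", errors="replace")
                        for arg in raw.split(b"\x00") if arg)

    # Sample copied from the bpftool audit records above.
    sample = ("627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F77"
              "0070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F7072"
              "6566696C7465725F76315F63616C69636F5F746D705F41")
    print(decode_proctitle(sample))
    # -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A

The iptables-nft PROCTITLE values above and below decode the same way, to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000". In the accompanying SYSCALL records, syscall=321 is bpf(2) on x86-64, and the calls rejected alongside the "confidentiality" lockdown denials return exit=-22 (EINVAL).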
Dec 13 14:31:23.205475 systemd-networkd[1701]: cali11f205dbd28: Link UP Dec 13 14:31:23.216608 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:31:23.217049 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali11f205dbd28: link becomes ready Dec 13 14:31:23.217732 systemd-networkd[1701]: cali11f205dbd28: Gained carrier Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.127 [INFO][4174] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0 calico-kube-controllers-547bf79797- calico-system 8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6 773 0 2024-12-13 14:30:54 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:547bf79797 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.6-a-e445ccd8ad calico-kube-controllers-547bf79797-klbv4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali11f205dbd28 [] []}} ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Namespace="calico-system" Pod="calico-kube-controllers-547bf79797-klbv4" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.128 [INFO][4174] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Namespace="calico-system" Pod="calico-kube-controllers-547bf79797-klbv4" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.166 [INFO][4185] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" HandleID="k8s-pod-network.7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.173 [INFO][4185] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" HandleID="k8s-pod-network.7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.6-a-e445ccd8ad", "pod":"calico-kube-controllers-547bf79797-klbv4", "timestamp":"2024-12-13 14:31:23.166121793 +0000 UTC"}, Hostname:"ci-3510.3.6-a-e445ccd8ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.173 [INFO][4185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.173 [INFO][4185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.173 [INFO][4185] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-e445ccd8ad' Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.175 [INFO][4185] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.178 [INFO][4185] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.182 [INFO][4185] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.183 [INFO][4185] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.185 [INFO][4185] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.185 [INFO][4185] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.186 [INFO][4185] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440 Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.193 [INFO][4185] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.200 [INFO][4185] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.194/26] block=192.168.89.192/26 handle="k8s-pod-network.7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.200 [INFO][4185] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.194/26] handle="k8s-pod-network.7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.201 [INFO][4185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
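Note: the NETFILTER_CFG audit records above (and the further ones below) show iptables-nft-restore registering Calico's chains table by table (mangle, nat, filter, raw). A small sketch for tallying those records from a saved copy of this log; Python standard library only, and the file name is hypothetical:

    import re
    from collections import Counter

    # Matches the table=<name>:<id> and entries=<n> fields of NETFILTER_CFG
    # records; findall() copes with several records flattened onto one line.
    pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=\d+ entries=(\d+)")

    totals = Counter()
    with open("boot.log") as log:      # hypothetical file name for this journal dump
        for line in log:
            for table, entries in pattern.findall(line):
                totals[table] += int(entries)

    for table, entries in totals.most_common():
        print(f"{table}: {entries} chain entries registered")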
Dec 13 14:31:23.233399 env[1521]: 2024-12-13 14:31:23.201 [INFO][4185] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.194/26] IPv6=[] ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" HandleID="k8s-pod-network.7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.236597 env[1521]: 2024-12-13 14:31:23.203 [INFO][4174] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Namespace="calico-system" Pod="calico-kube-controllers-547bf79797-klbv4" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0", GenerateName:"calico-kube-controllers-547bf79797-", Namespace:"calico-system", SelfLink:"", UID:"8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547bf79797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"", Pod:"calico-kube-controllers-547bf79797-klbv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11f205dbd28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:23.236597 env[1521]: 2024-12-13 14:31:23.203 [INFO][4174] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.194/32] ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Namespace="calico-system" Pod="calico-kube-controllers-547bf79797-klbv4" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.236597 env[1521]: 2024-12-13 14:31:23.203 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11f205dbd28 ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Namespace="calico-system" Pod="calico-kube-controllers-547bf79797-klbv4" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.236597 env[1521]: 2024-12-13 14:31:23.218 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Namespace="calico-system" Pod="calico-kube-controllers-547bf79797-klbv4" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.236597 env[1521]: 2024-12-13 
14:31:23.219 [INFO][4174] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Namespace="calico-system" Pod="calico-kube-controllers-547bf79797-klbv4" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0", GenerateName:"calico-kube-controllers-547bf79797-", Namespace:"calico-system", SelfLink:"", UID:"8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547bf79797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440", Pod:"calico-kube-controllers-547bf79797-klbv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11f205dbd28", MAC:"ca:46:95:d5:8e:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:23.236597 env[1521]: 2024-12-13 14:31:23.231 [INFO][4174] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440" Namespace="calico-system" Pod="calico-kube-controllers-547bf79797-klbv4" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:23.251000 audit[4203]: NETFILTER_CFG table=filter:105 family=2 entries=38 op=nft_register_chain pid=4203 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:23.251000 audit[4203]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffe5510da10 a2=0 a3=7ffe5510d9fc items=0 ppid=3867 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:23.251000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:23.258559 env[1521]: time="2024-12-13T14:31:23.258222452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:23.258559 env[1521]: time="2024-12-13T14:31:23.258275553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:23.258559 env[1521]: time="2024-12-13T14:31:23.258296053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:23.258559 env[1521]: time="2024-12-13T14:31:23.258518153Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440 pid=4212 runtime=io.containerd.runc.v2 Dec 13 14:31:23.286600 systemd[1]: run-containerd-runc-k8s.io-7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440-runc.1cjUYj.mount: Deactivated successfully. Dec 13 14:31:23.332583 env[1521]: time="2024-12-13T14:31:23.332535561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547bf79797-klbv4,Uid:8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440\"" Dec 13 14:31:23.979266 env[1521]: time="2024-12-13T14:31:23.979217680Z" level=info msg="StopPodSandbox for \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\"" Dec 13 14:31:24.036843 systemd-networkd[1701]: vxlan.calico: Gained IPv6LL Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.040 [INFO][4264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.041 [INFO][4264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" iface="eth0" netns="/var/run/netns/cni-70881ea6-bae6-0566-efb5-74e136d0ae44" Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.041 [INFO][4264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" iface="eth0" netns="/var/run/netns/cni-70881ea6-bae6-0566-efb5-74e136d0ae44" Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.041 [INFO][4264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" iface="eth0" netns="/var/run/netns/cni-70881ea6-bae6-0566-efb5-74e136d0ae44" Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.041 [INFO][4264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.041 [INFO][4264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.066 [INFO][4270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" HandleID="k8s-pod-network.228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.066 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.066 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.074 [WARNING][4270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" HandleID="k8s-pod-network.228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.074 [INFO][4270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" HandleID="k8s-pod-network.228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.076 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:24.079091 env[1521]: 2024-12-13 14:31:24.077 [INFO][4264] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:24.080066 env[1521]: time="2024-12-13T14:31:24.080015262Z" level=info msg="TearDown network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\" successfully" Dec 13 14:31:24.080197 env[1521]: time="2024-12-13T14:31:24.080176362Z" level=info msg="StopPodSandbox for \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\" returns successfully" Dec 13 14:31:24.080995 env[1521]: time="2024-12-13T14:31:24.080966664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-889kf,Uid:54fdb1bd-71c2-40c0-a409-956b5f88cc85,Namespace:calico-system,Attempt:1,}" Dec 13 14:31:24.084901 systemd[1]: run-netns-cni\x2d70881ea6\x2dbae6\x2d0566\x2defb5\x2d74e136d0ae44.mount: Deactivated successfully. 
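Note: the IPAM records above and below show this node's affine block 192.168.89.192/26 handing out consecutive addresses, 192.168.89.193 and .194 so far, with .195 claimed for csi-node-driver-889kf in the records below. Purely as an illustration of the block arithmetic (Python's standard ipaddress module, not Calico's own allocator):

    import ipaddress

    # The affine block this node owns, as reported by ipam/ipam.go above.
    block = ipaddress.ip_network("192.168.89.192/26")
    print(block.num_addresses)    # 64 addresses in a /26 block

    # Handing out usable addresses in order (network and broadcast excluded)
    # mirrors the .193 / .194 / .195 assignments recorded in this log.
    first_three = [str(ip) for ip in list(block.hosts())[:3]]
    print(first_three)            # ['192.168.89.193', '192.168.89.194', '192.168.89.195']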
Dec 13 14:31:24.249997 systemd-networkd[1701]: caliaa8ff3993d5: Link UP Dec 13 14:31:24.255601 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:31:24.255697 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaa8ff3993d5: link becomes ready Dec 13 14:31:24.256203 systemd-networkd[1701]: caliaa8ff3993d5: Gained carrier Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.154 [INFO][4278] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0 csi-node-driver- calico-system 54fdb1bd-71c2-40c0-a409-956b5f88cc85 780 0 2024-12-13 14:30:54 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.6-a-e445ccd8ad csi-node-driver-889kf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaa8ff3993d5 [] []}} ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Namespace="calico-system" Pod="csi-node-driver-889kf" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.154 [INFO][4278] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Namespace="calico-system" Pod="csi-node-driver-889kf" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.199 [INFO][4288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" HandleID="k8s-pod-network.701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.206 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" HandleID="k8s-pod-network.701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.6-a-e445ccd8ad", "pod":"csi-node-driver-889kf", "timestamp":"2024-12-13 14:31:24.199009793 +0000 UTC"}, Hostname:"ci-3510.3.6-a-e445ccd8ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.206 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.206 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.206 [INFO][4288] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-e445ccd8ad' Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.208 [INFO][4288] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.211 [INFO][4288] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.218 [INFO][4288] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.220 [INFO][4288] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.223 [INFO][4288] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.223 [INFO][4288] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.225 [INFO][4288] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462 Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.230 [INFO][4288] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.238 [INFO][4288] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.195/26] block=192.168.89.192/26 handle="k8s-pod-network.701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.238 [INFO][4288] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.195/26] handle="k8s-pod-network.701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.238 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:31:24.283058 env[1521]: 2024-12-13 14:31:24.238 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.195/26] IPv6=[] ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" HandleID="k8s-pod-network.701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.284063 env[1521]: 2024-12-13 14:31:24.241 [INFO][4278] cni-plugin/k8s.go 386: Populated endpoint ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Namespace="calico-system" Pod="csi-node-driver-889kf" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"54fdb1bd-71c2-40c0-a409-956b5f88cc85", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"", Pod:"csi-node-driver-889kf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa8ff3993d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:24.284063 env[1521]: 2024-12-13 14:31:24.241 [INFO][4278] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.195/32] ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Namespace="calico-system" Pod="csi-node-driver-889kf" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.284063 env[1521]: 2024-12-13 14:31:24.241 [INFO][4278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa8ff3993d5 ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Namespace="calico-system" Pod="csi-node-driver-889kf" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.284063 env[1521]: 2024-12-13 14:31:24.256 [INFO][4278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Namespace="calico-system" Pod="csi-node-driver-889kf" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.284063 env[1521]: 2024-12-13 14:31:24.265 [INFO][4278] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Namespace="calico-system" 
Pod="csi-node-driver-889kf" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"54fdb1bd-71c2-40c0-a409-956b5f88cc85", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462", Pod:"csi-node-driver-889kf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa8ff3993d5", MAC:"6e:4e:b3:cd:9e:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:24.284063 env[1521]: 2024-12-13 14:31:24.280 [INFO][4278] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462" Namespace="calico-system" Pod="csi-node-driver-889kf" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:24.299000 audit[4308]: NETFILTER_CFG table=filter:106 family=2 entries=38 op=nft_register_chain pid=4308 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:24.299000 audit[4308]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7fff952db4c0 a2=0 a3=7fff952db4ac items=0 ppid=3867 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:24.299000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:24.323708 env[1521]: time="2024-12-13T14:31:24.311981108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:24.323708 env[1521]: time="2024-12-13T14:31:24.312018008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:24.323708 env[1521]: time="2024-12-13T14:31:24.312027608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:24.323708 env[1521]: time="2024-12-13T14:31:24.312148208Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462 pid=4317 runtime=io.containerd.runc.v2 Dec 13 14:31:24.349981 systemd[1]: run-containerd-runc-k8s.io-701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462-runc.qTlyjq.mount: Deactivated successfully. Dec 13 14:31:24.355876 systemd-networkd[1701]: calic5f53f95e44: Gained IPv6LL Dec 13 14:31:24.386767 env[1521]: time="2024-12-13T14:31:24.386698416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-889kf,Uid:54fdb1bd-71c2-40c0-a409-956b5f88cc85,Namespace:calico-system,Attempt:1,} returns sandbox id \"701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462\"" Dec 13 14:31:24.995855 systemd-networkd[1701]: cali11f205dbd28: Gained IPv6LL Dec 13 14:31:25.764064 systemd-networkd[1701]: caliaa8ff3993d5: Gained IPv6LL Dec 13 14:31:25.808740 env[1521]: time="2024-12-13T14:31:25.808658256Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:25.816303 env[1521]: time="2024-12-13T14:31:25.816266277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:25.819804 env[1521]: time="2024-12-13T14:31:25.819775286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:25.823800 env[1521]: time="2024-12-13T14:31:25.823768197Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:25.824285 env[1521]: time="2024-12-13T14:31:25.824253999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 14:31:25.825343 env[1521]: time="2024-12-13T14:31:25.825316602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 14:31:25.827795 env[1521]: time="2024-12-13T14:31:25.827765208Z" level=info msg="CreateContainer within sandbox \"0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:31:25.865520 env[1521]: time="2024-12-13T14:31:25.865471512Z" level=info msg="CreateContainer within sandbox \"0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"81d5cb90686ff0b47ce29c0e761598daddce6d42e52f0c230ef55c12de9d53d8\"" Dec 13 14:31:25.867362 env[1521]: time="2024-12-13T14:31:25.866232015Z" level=info msg="StartContainer for \"81d5cb90686ff0b47ce29c0e761598daddce6d42e52f0c230ef55c12de9d53d8\"" Dec 13 14:31:25.954235 env[1521]: time="2024-12-13T14:31:25.954194157Z" level=info msg="StartContainer for \"81d5cb90686ff0b47ce29c0e761598daddce6d42e52f0c230ef55c12de9d53d8\" returns successfully" Dec 13 14:31:25.984874 env[1521]: 
time="2024-12-13T14:31:25.984834042Z" level=info msg="StopPodSandbox for \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\"" Dec 13 14:31:25.992472 env[1521]: time="2024-12-13T14:31:25.992431963Z" level=info msg="StopPodSandbox for \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\"" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.108 [INFO][4409] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.109 [INFO][4409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" iface="eth0" netns="/var/run/netns/cni-ce8f0b6e-097b-d057-fb7a-c55481c8ed00" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.109 [INFO][4409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" iface="eth0" netns="/var/run/netns/cni-ce8f0b6e-097b-d057-fb7a-c55481c8ed00" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.109 [INFO][4409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" iface="eth0" netns="/var/run/netns/cni-ce8f0b6e-097b-d057-fb7a-c55481c8ed00" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.109 [INFO][4409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.109 [INFO][4409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.171 [INFO][4426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" HandleID="k8s-pod-network.6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.172 [INFO][4426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.172 [INFO][4426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.181 [WARNING][4426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" HandleID="k8s-pod-network.6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.181 [INFO][4426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" HandleID="k8s-pod-network.6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.183 [INFO][4426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:26.190809 env[1521]: 2024-12-13 14:31:26.185 [INFO][4409] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:26.195099 env[1521]: time="2024-12-13T14:31:26.195043617Z" level=info msg="TearDown network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\" successfully" Dec 13 14:31:26.195264 env[1521]: time="2024-12-13T14:31:26.195237317Z" level=info msg="StopPodSandbox for \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\" returns successfully" Dec 13 14:31:26.196198 systemd[1]: run-netns-cni\x2dce8f0b6e\x2d097b\x2dd057\x2dfb7a\x2dc55481c8ed00.mount: Deactivated successfully. Dec 13 14:31:26.198470 env[1521]: time="2024-12-13T14:31:26.198425326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5p2k,Uid:91fc512f-0007-48c4-b73e-c4c21c09d9e8,Namespace:kube-system,Attempt:1,}" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.124 [INFO][4417] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.124 [INFO][4417] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" iface="eth0" netns="/var/run/netns/cni-e105757c-7cfa-26ec-da3b-34dbb964ef68" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.124 [INFO][4417] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" iface="eth0" netns="/var/run/netns/cni-e105757c-7cfa-26ec-da3b-34dbb964ef68" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.125 [INFO][4417] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" iface="eth0" netns="/var/run/netns/cni-e105757c-7cfa-26ec-da3b-34dbb964ef68" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.125 [INFO][4417] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.125 [INFO][4417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.223 [INFO][4431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" HandleID="k8s-pod-network.0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.223 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.223 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.235 [WARNING][4431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" HandleID="k8s-pod-network.0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.235 [INFO][4431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" HandleID="k8s-pod-network.0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.238 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:26.243839 env[1521]: 2024-12-13 14:31:26.240 [INFO][4417] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:26.243839 env[1521]: time="2024-12-13T14:31:26.241567744Z" level=info msg="TearDown network for sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\" successfully" Dec 13 14:31:26.243839 env[1521]: time="2024-12-13T14:31:26.241609344Z" level=info msg="StopPodSandbox for \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\" returns successfully" Dec 13 14:31:26.243839 env[1521]: time="2024-12-13T14:31:26.242323946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mtlt2,Uid:fc5c0614-dc8c-44ff-bad4-378662704e03,Namespace:kube-system,Attempt:1,}" Dec 13 14:31:26.244826 systemd[1]: run-netns-cni\x2de105757c\x2d7cfa\x2d26ec\x2dda3b\x2d34dbb964ef68.mount: Deactivated successfully. Dec 13 14:31:26.257000 audit[4439]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=4439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:26.257000 audit[4439]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffeea1375e0 a2=0 a3=7ffeea1375cc items=0 ppid=2867 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:26.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:26.262000 audit[4439]: NETFILTER_CFG table=nat:108 family=2 entries=14 op=nft_register_rule pid=4439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:26.262000 audit[4439]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeea1375e0 a2=0 a3=0 items=0 ppid=2867 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:26.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:26.499923 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:31:26.500023 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic57a349aac0: link becomes ready Dec 13 14:31:26.490179 systemd-networkd[1701]: calic57a349aac0: Link UP Dec 13 14:31:26.504964 systemd-networkd[1701]: calic57a349aac0: Gained carrier Dec 13 14:31:26.532910 kubelet[2732]: I1213 14:31:26.532536 2732 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="calico-apiserver/calico-apiserver-79547c4d58-hf5ns" podStartSLOduration=29.212555458 podStartE2EDuration="32.532463139s" podCreationTimestamp="2024-12-13 14:30:54 +0000 UTC" firstStartedPulling="2024-12-13 14:31:22.504684519 +0000 UTC m=+54.652483674" lastFinishedPulling="2024-12-13 14:31:25.8245921 +0000 UTC m=+57.972391355" observedRunningTime="2024-12-13 14:31:26.229408011 +0000 UTC m=+58.377207166" watchObservedRunningTime="2024-12-13 14:31:26.532463139 +0000 UTC m=+58.680262394" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.349 [INFO][4440] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0 coredns-76f75df574- kube-system 91fc512f-0007-48c4-b73e-c4c21c09d9e8 795 0 2024-12-13 14:30:42 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.6-a-e445ccd8ad coredns-76f75df574-f5p2k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic57a349aac0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Namespace="kube-system" Pod="coredns-76f75df574-f5p2k" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.349 [INFO][4440] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Namespace="kube-system" Pod="coredns-76f75df574-f5p2k" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.442 [INFO][4462] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" HandleID="k8s-pod-network.63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.454 [INFO][4462] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" HandleID="k8s-pod-network.63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310c80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.6-a-e445ccd8ad", "pod":"coredns-76f75df574-f5p2k", "timestamp":"2024-12-13 14:31:26.439914886 +0000 UTC"}, Hostname:"ci-3510.3.6-a-e445ccd8ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.454 [INFO][4462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.454 [INFO][4462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.454 [INFO][4462] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-e445ccd8ad' Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.456 [INFO][4462] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.460 [INFO][4462] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.464 [INFO][4462] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.466 [INFO][4462] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.469 [INFO][4462] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.469 [INFO][4462] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.470 [INFO][4462] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863 Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.475 [INFO][4462] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.484 [INFO][4462] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.196/26] block=192.168.89.192/26 handle="k8s-pod-network.63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.484 [INFO][4462] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.196/26] handle="k8s-pod-network.63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.484 [INFO][4462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:31:26.535155 env[1521]: 2024-12-13 14:31:26.484 [INFO][4462] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.196/26] IPv6=[] ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" HandleID="k8s-pod-network.63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.536064 env[1521]: 2024-12-13 14:31:26.486 [INFO][4440] cni-plugin/k8s.go 386: Populated endpoint ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Namespace="kube-system" Pod="coredns-76f75df574-f5p2k" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"91fc512f-0007-48c4-b73e-c4c21c09d9e8", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"", Pod:"coredns-76f75df574-f5p2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic57a349aac0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:26.536064 env[1521]: 2024-12-13 14:31:26.486 [INFO][4440] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.196/32] ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Namespace="kube-system" Pod="coredns-76f75df574-f5p2k" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.536064 env[1521]: 2024-12-13 14:31:26.487 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic57a349aac0 ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Namespace="kube-system" Pod="coredns-76f75df574-f5p2k" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.536064 env[1521]: 2024-12-13 14:31:26.505 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Namespace="kube-system" Pod="coredns-76f75df574-f5p2k" 
WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.536064 env[1521]: 2024-12-13 14:31:26.507 [INFO][4440] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Namespace="kube-system" Pod="coredns-76f75df574-f5p2k" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"91fc512f-0007-48c4-b73e-c4c21c09d9e8", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863", Pod:"coredns-76f75df574-f5p2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic57a349aac0", MAC:"36:77:99:95:bb:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:26.536064 env[1521]: 2024-12-13 14:31:26.533 [INFO][4440] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863" Namespace="kube-system" Pod="coredns-76f75df574-f5p2k" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:26.574569 env[1521]: time="2024-12-13T14:31:26.574038253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:26.574569 env[1521]: time="2024-12-13T14:31:26.574076953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:26.574569 env[1521]: time="2024-12-13T14:31:26.574091453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:26.574569 env[1521]: time="2024-12-13T14:31:26.574221954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863 pid=4498 runtime=io.containerd.runc.v2 Dec 13 14:31:26.584095 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia9030ca0f03: link becomes ready Dec 13 14:31:26.583100 systemd-networkd[1701]: calia9030ca0f03: Link UP Dec 13 14:31:26.583305 systemd-networkd[1701]: calia9030ca0f03: Gained carrier Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.421 [INFO][4450] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0 coredns-76f75df574- kube-system fc5c0614-dc8c-44ff-bad4-378662704e03 796 0 2024-12-13 14:30:42 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.6-a-e445ccd8ad coredns-76f75df574-mtlt2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia9030ca0f03 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Namespace="kube-system" Pod="coredns-76f75df574-mtlt2" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.421 [INFO][4450] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Namespace="kube-system" Pod="coredns-76f75df574-mtlt2" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.513 [INFO][4470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" HandleID="k8s-pod-network.180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.534 [INFO][4470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" HandleID="k8s-pod-network.180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003196a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.6-a-e445ccd8ad", "pod":"coredns-76f75df574-mtlt2", "timestamp":"2024-12-13 14:31:26.513835388 +0000 UTC"}, Hostname:"ci-3510.3.6-a-e445ccd8ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.534 [INFO][4470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.534 [INFO][4470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.534 [INFO][4470] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-e445ccd8ad' Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.536 [INFO][4470] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.541 [INFO][4470] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.547 [INFO][4470] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.550 [INFO][4470] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.553 [INFO][4470] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.553 [INFO][4470] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.554 [INFO][4470] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7 Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.558 [INFO][4470] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.567 [INFO][4470] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.197/26] block=192.168.89.192/26 handle="k8s-pod-network.180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.567 [INFO][4470] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.197/26] handle="k8s-pod-network.180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.567 [INFO][4470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:31:26.604504 env[1521]: 2024-12-13 14:31:26.567 [INFO][4470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.197/26] IPv6=[] ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" HandleID="k8s-pod-network.180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.606538 env[1521]: 2024-12-13 14:31:26.574 [INFO][4450] cni-plugin/k8s.go 386: Populated endpoint ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Namespace="kube-system" Pod="coredns-76f75df574-mtlt2" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fc5c0614-dc8c-44ff-bad4-378662704e03", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"", Pod:"coredns-76f75df574-mtlt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9030ca0f03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:26.606538 env[1521]: 2024-12-13 14:31:26.574 [INFO][4450] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.197/32] ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Namespace="kube-system" Pod="coredns-76f75df574-mtlt2" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.606538 env[1521]: 2024-12-13 14:31:26.574 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9030ca0f03 ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Namespace="kube-system" Pod="coredns-76f75df574-mtlt2" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.606538 env[1521]: 2024-12-13 14:31:26.582 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Namespace="kube-system" Pod="coredns-76f75df574-mtlt2" 
WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.606538 env[1521]: 2024-12-13 14:31:26.582 [INFO][4450] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Namespace="kube-system" Pod="coredns-76f75df574-mtlt2" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fc5c0614-dc8c-44ff-bad4-378662704e03", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7", Pod:"coredns-76f75df574-mtlt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9030ca0f03", MAC:"6e:c7:a8:ac:f7:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:26.606538 env[1521]: 2024-12-13 14:31:26.601 [INFO][4450] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7" Namespace="kube-system" Pod="coredns-76f75df574-mtlt2" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:26.607000 audit[4512]: NETFILTER_CFG table=filter:109 family=2 entries=52 op=nft_register_chain pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:26.616771 kernel: kauditd_printk_skb: 526 callbacks suppressed Dec 13 14:31:26.616852 kernel: audit: type=1325 audit(1734100286.607:407): table=filter:109 family=2 entries=52 op=nft_register_chain pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:26.607000 audit[4512]: SYSCALL arch=c000003e syscall=46 success=yes exit=25564 a0=3 a1=7ffc21938e70 a2=0 a3=7ffc21938e5c items=0 ppid=3867 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:26.656959 kernel: audit: type=1300 audit(1734100286.607:407): 
arch=c000003e syscall=46 success=yes exit=25564 a0=3 a1=7ffc21938e70 a2=0 a3=7ffc21938e5c items=0 ppid=3867 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:26.607000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:26.686839 kernel: audit: type=1327 audit(1734100286.607:407): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:26.696000 audit[4539]: NETFILTER_CFG table=filter:110 family=2 entries=44 op=nft_register_chain pid=4539 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:26.710599 kernel: audit: type=1325 audit(1734100286.696:408): table=filter:110 family=2 entries=44 op=nft_register_chain pid=4539 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:26.696000 audit[4539]: SYSCALL arch=c000003e syscall=46 success=yes exit=22244 a0=3 a1=7fff97f7b250 a2=0 a3=7fff97f7b23c items=0 ppid=3867 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:26.737831 kernel: audit: type=1300 audit(1734100286.696:408): arch=c000003e syscall=46 success=yes exit=22244 a0=3 a1=7fff97f7b250 a2=0 a3=7fff97f7b23c items=0 ppid=3867 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:26.748517 env[1521]: time="2024-12-13T14:31:26.748455430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:26.748714 env[1521]: time="2024-12-13T14:31:26.748692031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:26.748847 env[1521]: time="2024-12-13T14:31:26.748824931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:26.749253 env[1521]: time="2024-12-13T14:31:26.749222232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7 pid=4547 runtime=io.containerd.runc.v2 Dec 13 14:31:26.696000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:26.763919 env[1521]: time="2024-12-13T14:31:26.757244454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5p2k,Uid:91fc512f-0007-48c4-b73e-c4c21c09d9e8,Namespace:kube-system,Attempt:1,} returns sandbox id \"63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863\"" Dec 13 14:31:26.763919 env[1521]: time="2024-12-13T14:31:26.761712666Z" level=info msg="CreateContainer within sandbox \"63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:31:26.767746 kernel: audit: type=1327 audit(1734100286.696:408): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:26.806243 env[1521]: time="2024-12-13T14:31:26.806194088Z" level=info msg="CreateContainer within sandbox \"63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fb81b8325443dd0635de011c2afd20d54b5119090ab8db85395472dd5e8e9682\"" Dec 13 14:31:26.808004 env[1521]: time="2024-12-13T14:31:26.807966393Z" level=info msg="StartContainer for \"fb81b8325443dd0635de011c2afd20d54b5119090ab8db85395472dd5e8e9682\"" Dec 13 14:31:26.898053 env[1521]: time="2024-12-13T14:31:26.898011239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mtlt2,Uid:fc5c0614-dc8c-44ff-bad4-378662704e03,Namespace:kube-system,Attempt:1,} returns sandbox id \"180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7\"" Dec 13 14:31:26.902642 env[1521]: time="2024-12-13T14:31:26.902602651Z" level=info msg="CreateContainer within sandbox \"180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:31:26.943747 env[1521]: time="2024-12-13T14:31:26.943422163Z" level=info msg="StartContainer for \"fb81b8325443dd0635de011c2afd20d54b5119090ab8db85395472dd5e8e9682\" returns successfully" Dec 13 14:31:26.946943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981171777.mount: Deactivated successfully. 
Dec 13 14:31:26.962883 env[1521]: time="2024-12-13T14:31:26.962542115Z" level=info msg="CreateContainer within sandbox \"180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2210605f26a1b03a8231733ddd1f718df127ee83abcc4c84ef957937165163a7\"" Dec 13 14:31:26.964743 env[1521]: time="2024-12-13T14:31:26.963231417Z" level=info msg="StartContainer for \"2210605f26a1b03a8231733ddd1f718df127ee83abcc4c84ef957937165163a7\"" Dec 13 14:31:26.981681 env[1521]: time="2024-12-13T14:31:26.981640267Z" level=info msg="StopPodSandbox for \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\"" Dec 13 14:31:27.117128 env[1521]: time="2024-12-13T14:31:27.117072635Z" level=info msg="StartContainer for \"2210605f26a1b03a8231733ddd1f718df127ee83abcc4c84ef957937165163a7\" returns successfully" Dec 13 14:31:27.215745 kubelet[2732]: I1213 14:31:27.214877 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.147 [INFO][4653] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.148 [INFO][4653] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" iface="eth0" netns="/var/run/netns/cni-c998f163-66db-751f-50cb-160a4ae4bb9b" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.148 [INFO][4653] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" iface="eth0" netns="/var/run/netns/cni-c998f163-66db-751f-50cb-160a4ae4bb9b" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.148 [INFO][4653] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" iface="eth0" netns="/var/run/netns/cni-c998f163-66db-751f-50cb-160a4ae4bb9b" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.148 [INFO][4653] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.148 [INFO][4653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.207 [INFO][4677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" HandleID="k8s-pod-network.d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.207 [INFO][4677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.207 [INFO][4677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.224 [WARNING][4677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" HandleID="k8s-pod-network.d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.225 [INFO][4677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" HandleID="k8s-pod-network.d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.232 [INFO][4677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:27.234811 env[1521]: 2024-12-13 14:31:27.233 [INFO][4653] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:27.239236 env[1521]: time="2024-12-13T14:31:27.239186566Z" level=info msg="TearDown network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\" successfully" Dec 13 14:31:27.239416 env[1521]: time="2024-12-13T14:31:27.239390466Z" level=info msg="StopPodSandbox for \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\" returns successfully" Dec 13 14:31:27.240309 env[1521]: time="2024-12-13T14:31:27.240276269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79547c4d58-vbfr7,Uid:5544211b-a287-4828-991d-868381eea812,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:31:27.259000 audit[4686]: NETFILTER_CFG table=filter:111 family=2 entries=16 op=nft_register_rule pid=4686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:27.274753 kernel: audit: type=1325 audit(1734100287.259:409): table=filter:111 family=2 entries=16 op=nft_register_rule pid=4686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:27.277130 kubelet[2732]: I1213 14:31:27.277091 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mtlt2" podStartSLOduration=45.277043268 podStartE2EDuration="45.277043268s" podCreationTimestamp="2024-12-13 14:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:27.236549658 +0000 UTC m=+59.384348913" watchObservedRunningTime="2024-12-13 14:31:27.277043268 +0000 UTC m=+59.424842523" Dec 13 14:31:27.259000 audit[4686]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdf040f790 a2=0 a3=7ffdf040f77c items=0 ppid=2867 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:27.297754 kernel: audit: type=1300 audit(1734100287.259:409): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdf040f790 a2=0 a3=7ffdf040f77c items=0 ppid=2867 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:27.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:27.316857 kernel: audit: type=1327 audit(1734100287.259:409): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:27.277000 audit[4686]: NETFILTER_CFG table=nat:112 family=2 entries=14 op=nft_register_rule pid=4686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:27.343816 kernel: audit: type=1325 audit(1734100287.277:410): table=nat:112 family=2 entries=14 op=nft_register_rule pid=4686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:27.277000 audit[4686]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf040f790 a2=0 a3=0 items=0 ppid=2867 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:27.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:27.349000 audit[4695]: NETFILTER_CFG table=filter:113 family=2 entries=13 op=nft_register_rule pid=4695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:27.349000 audit[4695]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd03ba5fc0 a2=0 a3=7ffd03ba5fac items=0 ppid=2867 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:27.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:27.357000 audit[4695]: NETFILTER_CFG table=nat:114 family=2 entries=35 op=nft_register_chain pid=4695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:27.357000 audit[4695]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd03ba5fc0 a2=0 a3=7ffd03ba5fac items=0 ppid=2867 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:27.357000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:27.556267 systemd-networkd[1701]: calie1425db48f8: Link UP Dec 13 14:31:27.570289 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:31:27.570389 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie1425db48f8: link becomes ready Dec 13 14:31:27.572285 systemd-networkd[1701]: calie1425db48f8: Gained carrier Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.432 [INFO][4689] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0 calico-apiserver-79547c4d58- calico-apiserver 5544211b-a287-4828-991d-868381eea812 815 0 2024-12-13 14:30:54 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79547c4d58 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.6-a-e445ccd8ad calico-apiserver-79547c4d58-vbfr7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie1425db48f8 [] []}} 
ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-vbfr7" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.432 [INFO][4689] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-vbfr7" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.498 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" HandleID="k8s-pod-network.193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.509 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" HandleID="k8s-pod-network.193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029e9c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.6-a-e445ccd8ad", "pod":"calico-apiserver-79547c4d58-vbfr7", "timestamp":"2024-12-13 14:31:27.498630469 +0000 UTC"}, Hostname:"ci-3510.3.6-a-e445ccd8ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.509 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.509 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.509 [INFO][4703] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.6-a-e445ccd8ad' Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.510 [INFO][4703] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.515 [INFO][4703] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.519 [INFO][4703] ipam/ipam.go 489: Trying affinity for 192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.520 [INFO][4703] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.523 [INFO][4703] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.523 [INFO][4703] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.524 [INFO][4703] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.529 [INFO][4703] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.538 [INFO][4703] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.198/26] block=192.168.89.192/26 handle="k8s-pod-network.193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.538 [INFO][4703] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.198/26] handle="k8s-pod-network.193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" host="ci-3510.3.6-a-e445ccd8ad" Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.538 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
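Aside (not part of the log): the IPAM trace above walks the usual Calico assignment path, trying the node's block affinity for 192.168.89.192/26 and finally claiming 192.168.89.198 from that block. A small illustrative Go check with the standard net package (not Calico's IPAM code) confirms the claimed address sits inside the affine block and shows how many addresses a /26 block provides:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Block affinity and claimed address taken from the IPAM log lines above.
	_, block, err := net.ParseCIDR("192.168.89.192/26")
	if err != nil {
		panic(err)
	}
	claimed := net.ParseIP("192.168.89.198")

	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 2^(32-26) = 64
	fmt.Printf("claimed %s inside block: %v\n", claimed, block.Contains(claimed))
}

Running it prints that the /26 holds 64 addresses and that 192.168.89.198 is contained in it, consistent with the "Successfully claimed IPs: [192.168.89.198/26] block=192.168.89.192/26" line.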
Dec 13 14:31:27.588794 env[1521]: 2024-12-13 14:31:27.538 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.198/26] IPv6=[] ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" HandleID="k8s-pod-network.193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.589667 env[1521]: 2024-12-13 14:31:27.541 [INFO][4689] cni-plugin/k8s.go 386: Populated endpoint ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-vbfr7" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0", GenerateName:"calico-apiserver-79547c4d58-", Namespace:"calico-apiserver", SelfLink:"", UID:"5544211b-a287-4828-991d-868381eea812", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79547c4d58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"", Pod:"calico-apiserver-79547c4d58-vbfr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1425db48f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:27.589667 env[1521]: 2024-12-13 14:31:27.541 [INFO][4689] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.198/32] ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-vbfr7" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.589667 env[1521]: 2024-12-13 14:31:27.541 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1425db48f8 ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-vbfr7" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.589667 env[1521]: 2024-12-13 14:31:27.573 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-vbfr7" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.589667 env[1521]: 2024-12-13 14:31:27.573 [INFO][4689] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-vbfr7" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0", GenerateName:"calico-apiserver-79547c4d58-", Namespace:"calico-apiserver", SelfLink:"", UID:"5544211b-a287-4828-991d-868381eea812", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79547c4d58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f", Pod:"calico-apiserver-79547c4d58-vbfr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1425db48f8", MAC:"c6:99:71:d8:a4:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:27.589667 env[1521]: 2024-12-13 14:31:27.587 [INFO][4689] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f" Namespace="calico-apiserver" Pod="calico-apiserver-79547c4d58-vbfr7" WorkloadEndpoint="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:27.590123 kubelet[2732]: I1213 14:31:27.589769 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f5p2k" podStartSLOduration=45.589674615 podStartE2EDuration="45.589674615s" podCreationTimestamp="2024-12-13 14:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:27.281802081 +0000 UTC m=+59.429601236" watchObservedRunningTime="2024-12-13 14:31:27.589674615 +0000 UTC m=+59.737473770" Dec 13 14:31:27.647444 env[1521]: time="2024-12-13T14:31:27.638176747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:27.647444 env[1521]: time="2024-12-13T14:31:27.638216747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:27.647444 env[1521]: time="2024-12-13T14:31:27.638230747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:27.647444 env[1521]: time="2024-12-13T14:31:27.638365147Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f pid=4749 runtime=io.containerd.runc.v2 Dec 13 14:31:27.747948 systemd-networkd[1701]: calic57a349aac0: Gained IPv6LL Dec 13 14:31:27.771000 audit[4779]: NETFILTER_CFG table=filter:115 family=2 entries=42 op=nft_register_chain pid=4779 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:31:27.771000 audit[4779]: SYSCALL arch=c000003e syscall=46 success=yes exit=22672 a0=3 a1=7fff231decb0 a2=0 a3=7fff231dec9c items=0 ppid=3867 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:27.771000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:31:27.818839 env[1521]: time="2024-12-13T14:31:27.818793736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79547c4d58-vbfr7,Uid:5544211b-a287-4828-991d-868381eea812,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f\"" Dec 13 14:31:27.822056 env[1521]: time="2024-12-13T14:31:27.822017945Z" level=info msg="CreateContainer within sandbox \"193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:31:27.862536 systemd[1]: run-netns-cni\x2dc998f163\x2d66db\x2d751f\x2d50cb\x2d160a4ae4bb9b.mount: Deactivated successfully. Dec 13 14:31:27.888755 env[1521]: time="2024-12-13T14:31:27.887847023Z" level=info msg="CreateContainer within sandbox \"193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50ca63bae88ff8faf8e795b80c09d6b804edff9a9579468d048cda073a4f1467\"" Dec 13 14:31:27.889538 env[1521]: time="2024-12-13T14:31:27.889503228Z" level=info msg="StartContainer for \"50ca63bae88ff8faf8e795b80c09d6b804edff9a9579468d048cda073a4f1467\"" Dec 13 14:31:28.008237 env[1521]: time="2024-12-13T14:31:28.008197649Z" level=info msg="StopPodSandbox for \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\"" Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.071 [WARNING][4836] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fc5c0614-dc8c-44ff-bad4-378662704e03", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7", Pod:"coredns-76f75df574-mtlt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9030ca0f03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.071 [INFO][4836] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.071 [INFO][4836] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" iface="eth0" netns="" Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.071 [INFO][4836] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.071 [INFO][4836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.144 [INFO][4847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" HandleID="k8s-pod-network.0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.144 [INFO][4847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.144 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.151 [WARNING][4847] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" HandleID="k8s-pod-network.0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.151 [INFO][4847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" HandleID="k8s-pod-network.0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.152 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:28.156621 env[1521]: 2024-12-13 14:31:28.155 [INFO][4836] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:28.157401 env[1521]: time="2024-12-13T14:31:28.157360950Z" level=info msg="TearDown network for sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\" successfully" Dec 13 14:31:28.157505 env[1521]: time="2024-12-13T14:31:28.157487350Z" level=info msg="StopPodSandbox for \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\" returns successfully" Dec 13 14:31:28.158145 env[1521]: time="2024-12-13T14:31:28.158118352Z" level=info msg="RemovePodSandbox for \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\"" Dec 13 14:31:28.158334 env[1521]: time="2024-12-13T14:31:28.158277552Z" level=info msg="Forcibly stopping sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\"" Dec 13 14:31:28.163881 env[1521]: time="2024-12-13T14:31:28.163850967Z" level=info msg="StartContainer for \"50ca63bae88ff8faf8e795b80c09d6b804edff9a9579468d048cda073a4f1467\" returns successfully" Dec 13 14:31:28.257352 kubelet[2732]: I1213 14:31:28.256822 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79547c4d58-vbfr7" podStartSLOduration=34.256773716 podStartE2EDuration="34.256773716s" podCreationTimestamp="2024-12-13 14:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:28.256342915 +0000 UTC m=+60.404142070" watchObservedRunningTime="2024-12-13 14:31:28.256773716 +0000 UTC m=+60.404572971" Dec 13 14:31:28.259934 systemd-networkd[1701]: calia9030ca0f03: Gained IPv6LL Dec 13 14:31:28.323000 audit[4879]: NETFILTER_CFG table=filter:116 family=2 entries=10 op=nft_register_rule pid=4879 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:28.323000 audit[4879]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff2b16bc90 a2=0 a3=7fff2b16bc7c items=0 ppid=2867 pid=4879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:28.323000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:28.327000 audit[4879]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=4879 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:28.327000 audit[4879]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff2b16bc90 a2=0 a3=7fff2b16bc7c items=0 ppid=2867 pid=4879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:28.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.289 [WARNING][4868] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fc5c0614-dc8c-44ff-bad4-378662704e03", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"180fd3e095684ef3541f0b1f91618551bafad4c608be129f2d3d44f8479b6ba7", Pod:"coredns-76f75df574-mtlt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9030ca0f03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.289 [INFO][4868] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.289 [INFO][4868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" iface="eth0" netns="" Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.289 [INFO][4868] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.289 [INFO][4868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.374 [INFO][4875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" HandleID="k8s-pod-network.0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.375 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.375 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.390 [WARNING][4875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" HandleID="k8s-pod-network.0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.390 [INFO][4875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" HandleID="k8s-pod-network.0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--mtlt2-eth0" Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.392 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:28.398434 env[1521]: 2024-12-13 14:31:28.396 [INFO][4868] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c" Dec 13 14:31:28.399235 env[1521]: time="2024-12-13T14:31:28.399193299Z" level=info msg="TearDown network for sandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\" successfully" Dec 13 14:31:28.423879 env[1521]: time="2024-12-13T14:31:28.423770165Z" level=info msg="RemovePodSandbox \"0944619dc72561d20e0523cae75fcaf713d71cc5b3194a09fbc22a0dc3e35f0c\" returns successfully" Dec 13 14:31:28.424708 env[1521]: time="2024-12-13T14:31:28.424678567Z" level=info msg="StopPodSandbox for \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\"" Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.512 [WARNING][4896] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0", GenerateName:"calico-apiserver-79547c4d58-", Namespace:"calico-apiserver", SelfLink:"", UID:"5544211b-a287-4828-991d-868381eea812", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79547c4d58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f", Pod:"calico-apiserver-79547c4d58-vbfr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1425db48f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.512 [INFO][4896] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.512 [INFO][4896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" iface="eth0" netns="" Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.512 [INFO][4896] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.512 [INFO][4896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.571 [INFO][4903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" HandleID="k8s-pod-network.d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.571 [INFO][4903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.571 [INFO][4903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.580 [WARNING][4903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" HandleID="k8s-pod-network.d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.580 [INFO][4903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" HandleID="k8s-pod-network.d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.581 [INFO][4903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:28.588323 env[1521]: 2024-12-13 14:31:28.585 [INFO][4896] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:28.589841 env[1521]: time="2024-12-13T14:31:28.589790411Z" level=info msg="TearDown network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\" successfully" Dec 13 14:31:28.590003 env[1521]: time="2024-12-13T14:31:28.589976911Z" level=info msg="StopPodSandbox for \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\" returns successfully" Dec 13 14:31:28.590625 env[1521]: time="2024-12-13T14:31:28.590596313Z" level=info msg="RemovePodSandbox for \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\"" Dec 13 14:31:28.590794 env[1521]: time="2024-12-13T14:31:28.590745513Z" level=info msg="Forcibly stopping sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\"" Dec 13 14:31:28.772628 systemd-networkd[1701]: calie1425db48f8: Gained IPv6LL Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.695 [WARNING][4922] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0", GenerateName:"calico-apiserver-79547c4d58-", Namespace:"calico-apiserver", SelfLink:"", UID:"5544211b-a287-4828-991d-868381eea812", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79547c4d58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"193ee96595faacb0da84b61ca1fd87987347613408b60a9592c39d4ad935f69f", Pod:"calico-apiserver-79547c4d58-vbfr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1425db48f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.695 [INFO][4922] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.695 [INFO][4922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" iface="eth0" netns="" Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.695 [INFO][4922] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.695 [INFO][4922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.756 [INFO][4928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" HandleID="k8s-pod-network.d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.756 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.756 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.763 [WARNING][4928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" HandleID="k8s-pod-network.d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.763 [INFO][4928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" HandleID="k8s-pod-network.d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--vbfr7-eth0" Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.765 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:28.774250 env[1521]: 2024-12-13 14:31:28.767 [INFO][4922] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5" Dec 13 14:31:28.774250 env[1521]: time="2024-12-13T14:31:28.774171706Z" level=info msg="TearDown network for sandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\" successfully" Dec 13 14:31:28.782888 env[1521]: time="2024-12-13T14:31:28.782855229Z" level=info msg="RemovePodSandbox \"d77d4dbd61d8be11c8fbc0fb2c35edaea7d2c6399a4ef0c3a37e0c75e8ea9dc5\" returns successfully" Dec 13 14:31:28.783571 env[1521]: time="2024-12-13T14:31:28.783540731Z" level=info msg="StopPodSandbox for \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\"" Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:28.922 [WARNING][4949] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"91fc512f-0007-48c4-b73e-c4c21c09d9e8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863", Pod:"coredns-76f75df574-f5p2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic57a349aac0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:28.922 [INFO][4949] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:28.922 [INFO][4949] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" iface="eth0" netns="" Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:28.922 [INFO][4949] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:28.922 [INFO][4949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:28.988 [INFO][4956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" HandleID="k8s-pod-network.6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:28.988 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:28.988 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:28.999 [WARNING][4956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" HandleID="k8s-pod-network.6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:29.000 [INFO][4956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" HandleID="k8s-pod-network.6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:29.002 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:29.005308 env[1521]: 2024-12-13 14:31:29.004 [INFO][4949] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:29.006166 env[1521]: time="2024-12-13T14:31:29.005338527Z" level=info msg="TearDown network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\" successfully" Dec 13 14:31:29.006166 env[1521]: time="2024-12-13T14:31:29.005376327Z" level=info msg="StopPodSandbox for \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\" returns successfully" Dec 13 14:31:29.006802 env[1521]: time="2024-12-13T14:31:29.006767630Z" level=info msg="RemovePodSandbox for \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\"" Dec 13 14:31:29.006953 env[1521]: time="2024-12-13T14:31:29.006811531Z" level=info msg="Forcibly stopping sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\"" Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.082 [WARNING][4976] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"91fc512f-0007-48c4-b73e-c4c21c09d9e8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"63eb8646f721dc2a995dbe9b9ba0c3bb1124707f6d17452aa3c475edb2b1f863", Pod:"coredns-76f75df574-f5p2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic57a349aac0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.082 [INFO][4976] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.082 [INFO][4976] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" iface="eth0" netns="" Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.082 [INFO][4976] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.082 [INFO][4976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.136 [INFO][4982] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" HandleID="k8s-pod-network.6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.138 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.138 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.145 [WARNING][4982] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" HandleID="k8s-pod-network.6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.145 [INFO][4982] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" HandleID="k8s-pod-network.6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-coredns--76f75df574--f5p2k-eth0" Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.146 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:29.151807 env[1521]: 2024-12-13 14:31:29.148 [INFO][4976] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b" Dec 13 14:31:29.152925 env[1521]: time="2024-12-13T14:31:29.152878219Z" level=info msg="TearDown network for sandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\" successfully" Dec 13 14:31:29.163326 env[1521]: time="2024-12-13T14:31:29.163282947Z" level=info msg="RemovePodSandbox \"6366f83233467e3cb99a033fe8432bdcc2995e38c1ae4c26d4f7e6832483796b\" returns successfully" Dec 13 14:31:29.165741 env[1521]: time="2024-12-13T14:31:29.165702754Z" level=info msg="StopPodSandbox for \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\"" Dec 13 14:31:29.245107 kubelet[2732]: I1213 14:31:29.245076 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.240 [WARNING][5003] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0", GenerateName:"calico-kube-controllers-547bf79797-", Namespace:"calico-system", SelfLink:"", UID:"8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547bf79797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440", Pod:"calico-kube-controllers-547bf79797-klbv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11f205dbd28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.241 [INFO][5003] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.241 [INFO][5003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" iface="eth0" netns="" Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.241 [INFO][5003] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.241 [INFO][5003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.277 [INFO][5009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" HandleID="k8s-pod-network.53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.277 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.277 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.297 [WARNING][5009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" HandleID="k8s-pod-network.53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.297 [INFO][5009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" HandleID="k8s-pod-network.53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.298 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:29.300461 env[1521]: 2024-12-13 14:31:29.299 [INFO][5003] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:29.300991 env[1521]: time="2024-12-13T14:31:29.300501612Z" level=info msg="TearDown network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\" successfully" Dec 13 14:31:29.300991 env[1521]: time="2024-12-13T14:31:29.300538313Z" level=info msg="StopPodSandbox for \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\" returns successfully" Dec 13 14:31:29.301575 env[1521]: time="2024-12-13T14:31:29.301540215Z" level=info msg="RemovePodSandbox for \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\"" Dec 13 14:31:29.301678 env[1521]: time="2024-12-13T14:31:29.301595415Z" level=info msg="Forcibly stopping sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\"" Dec 13 14:31:29.358000 audit[5031]: NETFILTER_CFG table=filter:118 family=2 entries=10 op=nft_register_rule pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:29.358000 audit[5031]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc70ef52d0 a2=0 a3=7ffc70ef52bc items=0 ppid=2867 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:29.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:29.421000 audit[5031]: NETFILTER_CFG table=nat:119 family=2 entries=56 op=nft_register_chain pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:29.421000 audit[5031]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc70ef52d0 a2=0 a3=7ffc70ef52bc items=0 ppid=2867 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:29.421000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.395 [WARNING][5032] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0", GenerateName:"calico-kube-controllers-547bf79797-", Namespace:"calico-system", SelfLink:"", UID:"8e9c26fd-792a-4f23-8d1f-2da5a5cf95e6", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547bf79797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440", Pod:"calico-kube-controllers-547bf79797-klbv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11f205dbd28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.395 [INFO][5032] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.395 [INFO][5032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" iface="eth0" netns="" Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.395 [INFO][5032] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.395 [INFO][5032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.427 [INFO][5039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" HandleID="k8s-pod-network.53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.428 [INFO][5039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.428 [INFO][5039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.434 [WARNING][5039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" HandleID="k8s-pod-network.53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.434 [INFO][5039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" HandleID="k8s-pod-network.53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--kube--controllers--547bf79797--klbv4-eth0" Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.438 [INFO][5039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:29.440939 env[1521]: 2024-12-13 14:31:29.440 [INFO][5032] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867" Dec 13 14:31:29.441429 env[1521]: time="2024-12-13T14:31:29.441383788Z" level=info msg="TearDown network for sandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\" successfully" Dec 13 14:31:29.449185 env[1521]: time="2024-12-13T14:31:29.449147908Z" level=info msg="RemovePodSandbox \"53400c2440b3aebc1028ad19f61b5890f7a867ceae25b178ba35d352ea25b867\" returns successfully" Dec 13 14:31:29.449856 env[1521]: time="2024-12-13T14:31:29.449832110Z" level=info msg="StopPodSandbox for \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\"" Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.512 [WARNING][5060] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"54fdb1bd-71c2-40c0-a409-956b5f88cc85", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462", Pod:"csi-node-driver-889kf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa8ff3993d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.512 [INFO][5060] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.513 [INFO][5060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" iface="eth0" netns="" Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.513 [INFO][5060] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.513 [INFO][5060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.550 [INFO][5066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" HandleID="k8s-pod-network.228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.552 [INFO][5066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.552 [INFO][5066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.559 [WARNING][5066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" HandleID="k8s-pod-network.228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.566 [INFO][5066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" HandleID="k8s-pod-network.228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.568 [INFO][5066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:29.570929 env[1521]: 2024-12-13 14:31:29.569 [INFO][5060] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:29.571535 env[1521]: time="2024-12-13T14:31:29.570964133Z" level=info msg="TearDown network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\" successfully" Dec 13 14:31:29.571535 env[1521]: time="2024-12-13T14:31:29.571001333Z" level=info msg="StopPodSandbox for \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\" returns successfully" Dec 13 14:31:29.571792 env[1521]: time="2024-12-13T14:31:29.571758435Z" level=info msg="RemovePodSandbox for \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\"" Dec 13 14:31:29.571912 env[1521]: time="2024-12-13T14:31:29.571798235Z" level=info msg="Forcibly stopping sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\"" Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.658 [WARNING][5086] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"54fdb1bd-71c2-40c0-a409-956b5f88cc85", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462", Pod:"csi-node-driver-889kf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa8ff3993d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.658 [INFO][5086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.658 [INFO][5086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" iface="eth0" netns="" Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.659 [INFO][5086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.659 [INFO][5086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.694 [INFO][5092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" HandleID="k8s-pod-network.228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.694 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.694 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.704 [WARNING][5092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" HandleID="k8s-pod-network.228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.705 [INFO][5092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" HandleID="k8s-pod-network.228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-csi--node--driver--889kf-eth0" Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.709 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:29.714305 env[1521]: 2024-12-13 14:31:29.710 [INFO][5086] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261" Dec 13 14:31:29.714305 env[1521]: time="2024-12-13T14:31:29.712136608Z" level=info msg="TearDown network for sandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\" successfully" Dec 13 14:31:29.721901 env[1521]: time="2024-12-13T14:31:29.721865134Z" level=info msg="RemovePodSandbox \"228283d40bb806337b25331fe46918a421e6415a1697986d0243e9b3d2a4a261\" returns successfully" Dec 13 14:31:29.722598 env[1521]: time="2024-12-13T14:31:29.722573136Z" level=info msg="StopPodSandbox for \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\"" Dec 13 14:31:29.858058 env[1521]: time="2024-12-13T14:31:29.858010997Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:29.873668 env[1521]: time="2024-12-13T14:31:29.871800233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:29.877442 env[1521]: time="2024-12-13T14:31:29.877389448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.836 [WARNING][5112] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0", GenerateName:"calico-apiserver-79547c4d58-", Namespace:"calico-apiserver", SelfLink:"", UID:"b609b60d-e7a7-460b-9ba5-524f51fccf87", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79547c4d58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac", Pod:"calico-apiserver-79547c4d58-hf5ns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5f53f95e44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.836 [INFO][5112] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.836 [INFO][5112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" iface="eth0" netns="" Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.836 [INFO][5112] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.836 [INFO][5112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.868 [INFO][5119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" HandleID="k8s-pod-network.41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.868 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.869 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.876 [WARNING][5119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" HandleID="k8s-pod-network.41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.876 [INFO][5119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" HandleID="k8s-pod-network.41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.878 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:29.884290 env[1521]: 2024-12-13 14:31:29.881 [INFO][5112] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:29.886647 env[1521]: time="2024-12-13T14:31:29.886598573Z" level=info msg="TearDown network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\" successfully" Dec 13 14:31:29.886835 env[1521]: time="2024-12-13T14:31:29.886810173Z" level=info msg="StopPodSandbox for \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\" returns successfully" Dec 13 14:31:29.887461 env[1521]: time="2024-12-13T14:31:29.887434975Z" level=info msg="RemovePodSandbox for \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\"" Dec 13 14:31:29.887608 env[1521]: time="2024-12-13T14:31:29.887564775Z" level=info msg="Forcibly stopping sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\"" Dec 13 14:31:29.891775 env[1521]: time="2024-12-13T14:31:29.891745787Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:29.892766 env[1521]: time="2024-12-13T14:31:29.892225788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 14:31:29.913543 env[1521]: time="2024-12-13T14:31:29.913503445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 14:31:29.929875 env[1521]: time="2024-12-13T14:31:29.929840188Z" level=info msg="CreateContainer within sandbox \"7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 14:31:29.979086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930072313.mount: Deactivated successfully. Dec 13 14:31:29.989984 env[1521]: time="2024-12-13T14:31:29.989939548Z" level=info msg="CreateContainer within sandbox \"7abc272ca5836969665e41e4f88ce9cc6c752fea1817b113caa6b52b1a8c5440\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5\"" Dec 13 14:31:29.990864 env[1521]: time="2024-12-13T14:31:29.990831750Z" level=info msg="StartContainer for \"d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5\"" Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.030 [WARNING][5141] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0", GenerateName:"calico-apiserver-79547c4d58-", Namespace:"calico-apiserver", SelfLink:"", UID:"b609b60d-e7a7-460b-9ba5-524f51fccf87", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 30, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79547c4d58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.6-a-e445ccd8ad", ContainerID:"0e0c9f03332eee06e308ff28cde0364acc7600ff9f49fd251e16d9e16062fcac", Pod:"calico-apiserver-79547c4d58-hf5ns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5f53f95e44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.031 [INFO][5141] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.031 [INFO][5141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" iface="eth0" netns="" Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.031 [INFO][5141] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.031 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.079 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" HandleID="k8s-pod-network.41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.079 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.080 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.088 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" HandleID="k8s-pod-network.41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.088 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" HandleID="k8s-pod-network.41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Workload="ci--3510.3.6--a--e445ccd8ad-k8s-calico--apiserver--79547c4d58--hf5ns-eth0" Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.090 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:31:30.092238 env[1521]: 2024-12-13 14:31:30.090 [INFO][5141] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7" Dec 13 14:31:30.093173 env[1521]: time="2024-12-13T14:31:30.093122321Z" level=info msg="TearDown network for sandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\" successfully" Dec 13 14:31:30.102707 env[1521]: time="2024-12-13T14:31:30.102667446Z" level=info msg="RemovePodSandbox \"41aba980fabb7d68f22cb4ebda9b9bf46ff5cc979e78acc37b0f6c9f9fd4b2c7\" returns successfully" Dec 13 14:31:30.168027 env[1521]: time="2024-12-13T14:31:30.167977118Z" level=info msg="StartContainer for \"d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5\" returns successfully" Dec 13 14:31:30.381472 kubelet[2732]: I1213 14:31:30.381430 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-547bf79797-klbv4" podStartSLOduration=29.822053057 podStartE2EDuration="36.381372382s" podCreationTimestamp="2024-12-13 14:30:54 +0000 UTC" firstStartedPulling="2024-12-13 14:31:23.334221066 +0000 UTC m=+55.482020221" lastFinishedPulling="2024-12-13 14:31:29.893540391 +0000 UTC m=+62.041339546" observedRunningTime="2024-12-13 14:31:30.281700919 +0000 UTC m=+62.429500074" watchObservedRunningTime="2024-12-13 14:31:30.381372382 +0000 UTC m=+62.529171537" Dec 13 14:31:31.415870 env[1521]: time="2024-12-13T14:31:31.415823904Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:31.423021 env[1521]: time="2024-12-13T14:31:31.422985522Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:31.426298 env[1521]: time="2024-12-13T14:31:31.426268631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:31.429470 env[1521]: time="2024-12-13T14:31:31.429439539Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:31.429860 env[1521]: time="2024-12-13T14:31:31.429830140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 14:31:31.432482 env[1521]: 
time="2024-12-13T14:31:31.432450147Z" level=info msg="CreateContainer within sandbox \"701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 14:31:31.460220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1393942522.mount: Deactivated successfully. Dec 13 14:31:31.463065 env[1521]: time="2024-12-13T14:31:31.463027227Z" level=info msg="CreateContainer within sandbox \"701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"defeeeccddeaa88d39897bae27d257e94fdf1ebcf8c468313bd110796c17312d\"" Dec 13 14:31:31.464122 env[1521]: time="2024-12-13T14:31:31.464094930Z" level=info msg="StartContainer for \"defeeeccddeaa88d39897bae27d257e94fdf1ebcf8c468313bd110796c17312d\"" Dec 13 14:31:31.543900 env[1521]: time="2024-12-13T14:31:31.543858539Z" level=info msg="StartContainer for \"defeeeccddeaa88d39897bae27d257e94fdf1ebcf8c468313bd110796c17312d\" returns successfully" Dec 13 14:31:31.558304 env[1521]: time="2024-12-13T14:31:31.558015476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 14:31:33.215993 env[1521]: time="2024-12-13T14:31:33.215941787Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:33.222819 env[1521]: time="2024-12-13T14:31:33.222779205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:33.228465 env[1521]: time="2024-12-13T14:31:33.228431119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:33.232300 env[1521]: time="2024-12-13T14:31:33.232268229Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:31:33.233090 env[1521]: time="2024-12-13T14:31:33.232679930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 14:31:33.235898 env[1521]: time="2024-12-13T14:31:33.235864638Z" level=info msg="CreateContainer within sandbox \"701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 14:31:33.277031 env[1521]: time="2024-12-13T14:31:33.276986644Z" level=info msg="CreateContainer within sandbox \"701497d8fea15313a52ff7aacdd49c7018e74a4e385e1f3c5930c4c6a2697462\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"284d4a37847bf47ed1719473dabaf7eb1c4b557d4d92a75ea1e854f9a4d5567c\"" Dec 13 14:31:33.277855 env[1521]: time="2024-12-13T14:31:33.277822947Z" level=info msg="StartContainer for \"284d4a37847bf47ed1719473dabaf7eb1c4b557d4d92a75ea1e854f9a4d5567c\"" Dec 13 14:31:33.332386 systemd[1]: run-containerd-runc-k8s.io-284d4a37847bf47ed1719473dabaf7eb1c4b557d4d92a75ea1e854f9a4d5567c-runc.oGANLe.mount: Deactivated successfully. 
Dec 13 14:31:33.384487 env[1521]: time="2024-12-13T14:31:33.384439521Z" level=info msg="StartContainer for \"284d4a37847bf47ed1719473dabaf7eb1c4b557d4d92a75ea1e854f9a4d5567c\" returns successfully" Dec 13 14:31:34.128515 kubelet[2732]: I1213 14:31:34.128479 2732 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 14:31:34.128515 kubelet[2732]: I1213 14:31:34.128520 2732 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 14:31:37.114350 kubelet[2732]: I1213 14:31:37.114249 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:31:37.133955 kubelet[2732]: I1213 14:31:37.133918 2732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-889kf" podStartSLOduration=34.291988055 podStartE2EDuration="43.133871758s" podCreationTimestamp="2024-12-13 14:30:54 +0000 UTC" firstStartedPulling="2024-12-13 14:31:24.39177203 +0000 UTC m=+56.539571185" lastFinishedPulling="2024-12-13 14:31:33.233655733 +0000 UTC m=+65.381454888" observedRunningTime="2024-12-13 14:31:34.28621214 +0000 UTC m=+66.434011395" watchObservedRunningTime="2024-12-13 14:31:37.133871758 +0000 UTC m=+69.281670913" Dec 13 14:31:37.157000 audit[5282]: NETFILTER_CFG table=filter:120 family=2 entries=9 op=nft_register_rule pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:37.173758 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 14:31:37.173877 kernel: audit: type=1325 audit(1734100297.157:418): table=filter:120 family=2 entries=9 op=nft_register_rule pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:37.157000 audit[5282]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffec6bacb90 a2=0 a3=7ffec6bacb7c items=0 ppid=2867 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:37.192758 kernel: audit: type=1300 audit(1734100297.157:418): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffec6bacb90 a2=0 a3=7ffec6bacb7c items=0 ppid=2867 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:37.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:37.202632 kernel: audit: type=1327 audit(1734100297.157:418): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:37.203000 audit[5282]: NETFILTER_CFG table=nat:121 family=2 entries=27 op=nft_register_chain pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:37.203000 audit[5282]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffec6bacb90 a2=0 a3=7ffec6bacb7c items=0 ppid=2867 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:37.233618 kernel: audit: type=1325 audit(1734100297.203:419): 
table=nat:121 family=2 entries=27 op=nft_register_chain pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:31:37.233733 kernel: audit: type=1300 audit(1734100297.203:419): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffec6bacb90 a2=0 a3=7ffec6bacb7c items=0 ppid=2867 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:31:37.233774 kernel: audit: type=1327 audit(1734100297.203:419): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:37.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:31:39.394460 systemd[1]: run-containerd-runc-k8s.io-d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5-runc.K8k7oK.mount: Deactivated successfully. Dec 13 14:31:57.493521 systemd[1]: run-containerd-runc-k8s.io-0c8807f5ea6dbd33b8c31a67d5730c9cc8fe2aec7030aae3b7f858606b495407-runc.HZKuGY.mount: Deactivated successfully. Dec 13 14:32:07.430767 kubelet[2732]: I1213 14:32:07.430697 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:32:07.474000 audit[5342]: NETFILTER_CFG table=filter:122 family=2 entries=8 op=nft_register_rule pid=5342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:07.474000 audit[5342]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe31c9e060 a2=0 a3=7ffe31c9e04c items=0 ppid=2867 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:07.517433 kernel: audit: type=1325 audit(1734100327.474:420): table=filter:122 family=2 entries=8 op=nft_register_rule pid=5342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:07.517616 kernel: audit: type=1300 audit(1734100327.474:420): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe31c9e060 a2=0 a3=7ffe31c9e04c items=0 ppid=2867 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:07.518749 kernel: audit: type=1327 audit(1734100327.474:420): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:07.474000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:07.526000 audit[5342]: NETFILTER_CFG table=nat:123 family=2 entries=34 op=nft_register_chain pid=5342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:07.543424 kernel: audit: type=1325 audit(1734100327.526:421): table=nat:123 family=2 entries=34 op=nft_register_chain pid=5342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:32:07.543506 kernel: audit: type=1300 audit(1734100327.526:421): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffe31c9e060 a2=0 a3=7ffe31c9e04c items=0 ppid=2867 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:07.526000 audit[5342]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffe31c9e060 a2=0 a3=7ffe31c9e04c items=0 ppid=2867 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:07.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:07.557786 kernel: audit: type=1327 audit(1734100327.526:421): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:32:09.395653 systemd[1]: run-containerd-runc-k8s.io-d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5-runc.sBojRg.mount: Deactivated successfully. Dec 13 14:32:27.500074 systemd[1]: run-containerd-runc-k8s.io-0c8807f5ea6dbd33b8c31a67d5730c9cc8fe2aec7030aae3b7f858606b495407-runc.q6sy2n.mount: Deactivated successfully. Dec 13 14:32:28.285228 systemd[1]: run-containerd-runc-k8s.io-d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5-runc.1SXlgy.mount: Deactivated successfully. Dec 13 14:33:04.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.17:22-10.200.16.10:54654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:04.164915 systemd[1]: Started sshd@7-10.200.8.17:22-10.200.16.10:54654.service. Dec 13 14:33:04.181774 kernel: audit: type=1130 audit(1734100384.164:422): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.17:22-10.200.16.10:54654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:04.870000 audit[5481]: USER_ACCT pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:04.871069 sshd[5481]: Accepted publickey for core from 10.200.16.10 port 54654 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:04.889751 kernel: audit: type=1101 audit(1734100384.870:423): pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:04.889000 audit[5481]: CRED_ACQ pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:04.890477 sshd[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:04.896105 systemd[1]: Started session-10.scope. Dec 13 14:33:04.897066 systemd-logind[1505]: New session 10 of user core. 
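
The audit PROCTITLE records above carry the audited process's command line hex-encoded, with NUL bytes separating the arguments; the value logged for the iptables-restore events decodes to "iptables-restore -w 5 -W 100000 --noflush --counters". A small Go sketch of that decoding (an illustration, not part of any tool shown in this log):

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // PROCTITLE value copied verbatim from the audit records above.
        const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // Arguments are NUL-separated in the proctitle buffer.
        args := strings.Split(string(raw), "\x00")
        fmt.Println(strings.Join(args, " "))
        // Prints: iptables-restore -w 5 -W 100000 --noflush --counters
    }
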
Dec 13 14:33:04.916930 kernel: audit: type=1103 audit(1734100384.889:424): pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:04.917024 kernel: audit: type=1006 audit(1734100384.889:425): pid=5481 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 14:33:04.917061 kernel: audit: type=1300 audit(1734100384.889:425): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff68621240 a2=3 a3=0 items=0 ppid=1 pid=5481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:04.889000 audit[5481]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff68621240 a2=3 a3=0 items=0 ppid=1 pid=5481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:04.889000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:04.939434 kernel: audit: type=1327 audit(1734100384.889:425): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:04.942749 kernel: audit: type=1105 audit(1734100384.901:426): pid=5481 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:04.901000 audit[5481]: USER_START pid=5481 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:04.903000 audit[5484]: CRED_ACQ pid=5484 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:04.960748 kernel: audit: type=1103 audit(1734100384.903:427): pid=5484 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:05.541585 sshd[5481]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:05.542000 audit[5481]: USER_END pid=5481 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:05.544526 systemd[1]: sshd@7-10.200.8.17:22-10.200.16.10:54654.service: Deactivated successfully. Dec 13 14:33:05.545394 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:33:05.547400 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:33:05.548354 systemd-logind[1505]: Removed session 10. 
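
Each kernel "audit: type=..." line above is stamped audit(<seconds>.<milliseconds>:<serial>); the first part is plain Unix time, so audit(1734100384.870:423) on the type=1101/USER_ACCT pair above corresponds to 2024-12-13 14:33:04.870 UTC, matching the time sshd accepted the session-10 login. A quick check, as a sketch:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // audit(1734100384.870:423) from the USER_ACCT record above:
        // seconds.milliseconds since the Unix epoch, then the event serial number.
        sec, ms, serial := int64(1734100384), int64(870), 423

        t := time.Unix(sec, ms*int64(time.Millisecond)).UTC()
        fmt.Printf("serial %d at %s\n", serial, t.Format("2006-01-02 15:04:05.000 MST"))
        // Prints: serial 423 at 2024-12-13 14:33:04.870 UTC
    }
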
Dec 13 14:33:05.542000 audit[5481]: CRED_DISP pid=5481 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:05.576602 kernel: audit: type=1106 audit(1734100385.542:428): pid=5481 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:05.576704 kernel: audit: type=1104 audit(1734100385.542:429): pid=5481 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:05.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.17:22-10.200.16.10:54654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:09.403915 systemd[1]: run-containerd-runc-k8s.io-d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5-runc.dzgPdD.mount: Deactivated successfully. Dec 13 14:33:10.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.17:22-10.200.16.10:37896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:10.661586 systemd[1]: Started sshd@8-10.200.8.17:22-10.200.16.10:37896.service. Dec 13 14:33:10.667252 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:33:10.667336 kernel: audit: type=1130 audit(1734100390.661:431): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.17:22-10.200.16.10:37896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:11.368000 audit[5517]: USER_ACCT pid=5517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.369495 sshd[5517]: Accepted publickey for core from 10.200.16.10 port 37896 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:11.387745 kernel: audit: type=1101 audit(1734100391.368:432): pid=5517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.388297 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:11.386000 audit[5517]: CRED_ACQ pid=5517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.394420 systemd-logind[1505]: New session 11 of user core. Dec 13 14:33:11.395172 systemd[1]: Started session-11.scope. 
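
sshd identifies the accepted key above only by fingerprint (RSA SHA256:VL8Lvb...). As far as the format goes, the SHA256 form is an unpadded base64 encoding of a SHA-256 digest over the key's wire-format blob, i.e. the base64 field of an authorized_keys line. A stdlib-only Go sketch that prints the same style of fingerprint for an arbitrary public key file; the file argument is a placeholder and the code is not taken from OpenSSH:

    package main

    import (
        "crypto/sha256"
        "encoding/base64"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: fingerprint <authorized_keys-style public key file>")
            os.Exit(1)
        }
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            panic(err)
        }
        fields := strings.Fields(string(data)) // e.g. "ssh-rsa", "<base64 key blob>", "comment"
        if len(fields) < 2 {
            panic("not an authorized_keys-style line")
        }
        blob, err := base64.StdEncoding.DecodeString(fields[1])
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(blob)
        fmt.Println("SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]))
    }
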
Dec 13 14:33:11.405742 kernel: audit: type=1103 audit(1734100391.386:433): pid=5517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.417828 kernel: audit: type=1006 audit(1734100391.387:434): pid=5517 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 13 14:33:11.387000 audit[5517]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe01b34b60 a2=3 a3=0 items=0 ppid=1 pid=5517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:11.387000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:11.440112 kernel: audit: type=1300 audit(1734100391.387:434): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe01b34b60 a2=3 a3=0 items=0 ppid=1 pid=5517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:11.440196 kernel: audit: type=1327 audit(1734100391.387:434): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:11.440225 kernel: audit: type=1105 audit(1734100391.400:435): pid=5517 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.400000 audit[5517]: USER_START pid=5517 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.407000 audit[5520]: CRED_ACQ pid=5520 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.458756 kernel: audit: type=1103 audit(1734100391.407:436): pid=5520 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.928806 sshd[5517]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:11.930000 audit[5517]: USER_END pid=5517 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.933815 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:33:11.935181 systemd[1]: sshd@8-10.200.8.17:22-10.200.16.10:37896.service: Deactivated successfully. Dec 13 14:33:11.936016 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:33:11.937383 systemd-logind[1505]: Removed session 11. 
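
The SYSCALL records in these audit events name the call only by number and ABI: arch=c000003e is the x86-64 syscall ABI, syscall=46 in the iptables-restore records above is sendmsg (consistent with the ruleset being pushed to the kernel over netlink), and syscall=1 in the sshd records is write, with exit carrying the return value. A lookup sketch covering just the numbers that occur in this log; the per-record comments are readings of the entries above, not traced facts:

    package main

    import "fmt"

    // x86-64 syscall numbers for the two calls that appear in the audit records above.
    var x8664Syscalls = map[int]string{
        1:  "write",
        46: "sendmsg",
    }

    func main() {
        // arch=c000003e marks the x86-64 ABI; "exit" is the syscall's return value.
        records := []struct{ syscall, exit int }{
            {46, 19860}, // iptables-restore, exit=19860 bytes sent (nat table restore above)
            {1, 3},      // sshd, exit=3 bytes written
        }
        for _, rec := range records {
            fmt.Printf("%s returned %d\n", x8664Syscalls[rec.syscall], rec.exit)
        }
    }
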
Dec 13 14:33:11.949748 kernel: audit: type=1106 audit(1734100391.930:437): pid=5517 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.930000 audit[5517]: CRED_DISP pid=5517 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:11.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.17:22-10.200.16.10:37896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:11.965773 kernel: audit: type=1104 audit(1734100391.930:438): pid=5517 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:17.046629 systemd[1]: Started sshd@9-10.200.8.17:22-10.200.16.10:37902.service. Dec 13 14:33:17.058167 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:33:17.058251 kernel: audit: type=1130 audit(1734100397.046:440): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.17:22-10.200.16.10:37902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.17:22-10.200.16.10:37902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:17.753000 audit[5533]: USER_ACCT pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:17.754046 sshd[5533]: Accepted publickey for core from 10.200.16.10 port 37902 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:17.771749 kernel: audit: type=1101 audit(1734100397.753:441): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:17.771000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:17.772783 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:17.782536 systemd[1]: Started session-12.scope. Dec 13 14:33:17.783638 systemd-logind[1505]: New session 12 of user core. 
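
The per-connection unit names being started and stopped here (sshd@7-10.200.8.17:22-10.200.16.10:54654.service, sshd@8-..., sshd@9-...) appear to encode a connection counter followed by the local and remote endpoints, which matches the addresses sshd itself reports for each session. A trivial sketch splitting one of the names from the log under that assumption:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Unit name as logged above; assumed layout:
        // sshd@<connection counter>-<local addr:port>-<remote addr:port>.service
        unit := "sshd@9-10.200.8.17:22-10.200.16.10:37902.service"

        instance := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
        parts := strings.SplitN(instance, "-", 3)
        fmt.Printf("connection #%s, local %s, peer %s\n", parts[0], parts[1], parts[2])
        // Prints: connection #9, local 10.200.8.17:22, peer 10.200.16.10:37902
    }
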
Dec 13 14:33:17.790743 kernel: audit: type=1103 audit(1734100397.771:442): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:17.771000 audit[5533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd18745650 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:17.818343 kernel: audit: type=1006 audit(1734100397.771:443): pid=5533 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 13 14:33:17.818428 kernel: audit: type=1300 audit(1734100397.771:443): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd18745650 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:17.818457 kernel: audit: type=1327 audit(1734100397.771:443): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:17.771000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:17.787000 audit[5533]: USER_START pid=5533 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:17.842675 kernel: audit: type=1105 audit(1734100397.787:444): pid=5533 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:17.842778 kernel: audit: type=1103 audit(1734100397.790:445): pid=5536 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:17.790000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:18.316311 sshd[5533]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:18.317000 audit[5533]: USER_END pid=5533 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:18.319926 systemd[1]: sshd@9-10.200.8.17:22-10.200.16.10:37902.service: Deactivated successfully. Dec 13 14:33:18.320954 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:33:18.327844 systemd-logind[1505]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:33:18.328804 systemd-logind[1505]: Removed session 12. 
Dec 13 14:33:18.317000 audit[5533]: CRED_DISP pid=5533 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:18.350830 kernel: audit: type=1106 audit(1734100398.317:446): pid=5533 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:18.350935 kernel: audit: type=1104 audit(1734100398.317:447): pid=5533 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:18.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.17:22-10.200.16.10:37902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:18.434259 systemd[1]: Started sshd@10-10.200.8.17:22-10.200.16.10:37910.service. Dec 13 14:33:18.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.17:22-10.200.16.10:37910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:19.137000 audit[5546]: USER_ACCT pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:19.138823 sshd[5546]: Accepted publickey for core from 10.200.16.10 port 37910 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:19.139000 audit[5546]: CRED_ACQ pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:19.139000 audit[5546]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9db634e0 a2=3 a3=0 items=0 ppid=1 pid=5546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:19.139000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:19.140664 sshd[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:19.146519 systemd-logind[1505]: New session 13 of user core. Dec 13 14:33:19.147422 systemd[1]: Started session-13.scope. 
Dec 13 14:33:19.155000 audit[5546]: USER_START pid=5546 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:19.156000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:19.730144 sshd[5546]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:19.731000 audit[5546]: USER_END pid=5546 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:19.731000 audit[5546]: CRED_DISP pid=5546 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:19.733842 systemd[1]: sshd@10-10.200.8.17:22-10.200.16.10:37910.service: Deactivated successfully. Dec 13 14:33:19.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.17:22-10.200.16.10:37910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:19.735534 systemd-logind[1505]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:33:19.735603 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:33:19.737707 systemd-logind[1505]: Removed session 13. Dec 13 14:33:19.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.17:22-10.200.16.10:34146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:19.847582 systemd[1]: Started sshd@11-10.200.8.17:22-10.200.16.10:34146.service. 
Dec 13 14:33:20.553000 audit[5557]: USER_ACCT pid=5557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:20.554285 sshd[5557]: Accepted publickey for core from 10.200.16.10 port 34146 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:20.554000 audit[5557]: CRED_ACQ pid=5557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:20.554000 audit[5557]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc237fa980 a2=3 a3=0 items=0 ppid=1 pid=5557 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:20.554000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:20.555781 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:20.560795 systemd-logind[1505]: New session 14 of user core. Dec 13 14:33:20.561011 systemd[1]: Started session-14.scope. Dec 13 14:33:20.566000 audit[5557]: USER_START pid=5557 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:20.568000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:21.120839 sshd[5557]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:21.121000 audit[5557]: USER_END pid=5557 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:21.121000 audit[5557]: CRED_DISP pid=5557 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:21.124448 systemd[1]: sshd@11-10.200.8.17:22-10.200.16.10:34146.service: Deactivated successfully. Dec 13 14:33:21.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.17:22-10.200.16.10:34146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:21.126282 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:33:21.126885 systemd-logind[1505]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:33:21.128580 systemd-logind[1505]: Removed session 14. Dec 13 14:33:26.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.17:22-10.200.16.10:34160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:33:26.239223 systemd[1]: Started sshd@12-10.200.8.17:22-10.200.16.10:34160.service. Dec 13 14:33:26.261648 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 14:33:26.261783 kernel: audit: type=1130 audit(1734100406.237:467): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.17:22-10.200.16.10:34160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:26.945000 audit[5569]: USER_ACCT pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:26.965849 kernel: audit: type=1101 audit(1734100406.945:468): pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:26.965911 sshd[5569]: Accepted publickey for core from 10.200.16.10 port 34160 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:26.964000 audit[5569]: CRED_ACQ pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:26.966916 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:26.976035 systemd[1]: Started session-15.scope. Dec 13 14:33:26.977248 systemd-logind[1505]: New session 15 of user core. 
Dec 13 14:33:26.985755 kernel: audit: type=1103 audit(1734100406.964:469): pid=5569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:26.964000 audit[5569]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff00549960 a2=3 a3=0 items=0 ppid=1 pid=5569 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:26.996737 kernel: audit: type=1006 audit(1734100406.964:470): pid=5569 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 14:33:26.996781 kernel: audit: type=1300 audit(1734100406.964:470): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff00549960 a2=3 a3=0 items=0 ppid=1 pid=5569 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:26.964000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:27.014209 kernel: audit: type=1327 audit(1734100406.964:470): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:26.981000 audit[5569]: USER_START pid=5569 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:26.987000 audit[5572]: CRED_ACQ pid=5572 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:27.054151 kernel: audit: type=1105 audit(1734100406.981:471): pid=5569 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:27.054246 kernel: audit: type=1103 audit(1734100406.987:472): pid=5572 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:27.503927 sshd[5569]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:27.504575 systemd[1]: run-containerd-runc-k8s.io-0c8807f5ea6dbd33b8c31a67d5730c9cc8fe2aec7030aae3b7f858606b495407-runc.1t9E7R.mount: Deactivated successfully. 
Dec 13 14:33:27.528316 kernel: audit: type=1106 audit(1734100407.504:473): pid=5569 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:27.504000 audit[5569]: USER_END pid=5569 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:27.508493 systemd[1]: sshd@12-10.200.8.17:22-10.200.16.10:34160.service: Deactivated successfully. Dec 13 14:33:27.511183 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:33:27.511760 systemd-logind[1505]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:33:27.516868 systemd-logind[1505]: Removed session 15. Dec 13 14:33:27.505000 audit[5569]: CRED_DISP pid=5569 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:27.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.17:22-10.200.16.10:34160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:27.562850 kernel: audit: type=1104 audit(1734100407.505:474): pid=5569 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:28.284758 systemd[1]: run-containerd-runc-k8s.io-d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5-runc.EceiKm.mount: Deactivated successfully. Dec 13 14:33:32.642410 systemd[1]: Started sshd@13-10.200.8.17:22-10.200.16.10:57476.service. Dec 13 14:33:32.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.17:22-10.200.16.10:57476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:32.648349 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:33:32.648448 kernel: audit: type=1130 audit(1734100412.641:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.17:22-10.200.16.10:57476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:33:33.349000 audit[5627]: USER_ACCT pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.369588 sshd[5627]: Accepted publickey for core from 10.200.16.10 port 57476 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:33.370130 kernel: audit: type=1101 audit(1734100413.349:477): pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.368000 audit[5627]: CRED_ACQ pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.371025 sshd[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:33.378332 systemd[1]: Started session-16.scope. Dec 13 14:33:33.379250 systemd-logind[1505]: New session 16 of user core. Dec 13 14:33:33.388742 kernel: audit: type=1103 audit(1734100413.368:478): pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.398757 kernel: audit: type=1006 audit(1734100413.368:479): pid=5627 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 14:33:33.398843 kernel: audit: type=1300 audit(1734100413.368:479): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6a7bdfc0 a2=3 a3=0 items=0 ppid=1 pid=5627 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:33.368000 audit[5627]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6a7bdfc0 a2=3 a3=0 items=0 ppid=1 pid=5627 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:33.368000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:33.421108 kernel: audit: type=1327 audit(1734100413.368:479): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:33.421176 kernel: audit: type=1105 audit(1734100413.387:480): pid=5627 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.387000 audit[5627]: USER_START pid=5627 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.387000 audit[5630]: CRED_ACQ pid=5630 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.453920 kernel: audit: type=1103 audit(1734100413.387:481): pid=5630 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.905493 sshd[5627]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:33.905000 audit[5627]: USER_END pid=5627 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.909164 systemd[1]: sshd@13-10.200.8.17:22-10.200.16.10:57476.service: Deactivated successfully. Dec 13 14:33:33.910002 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:33:33.916145 systemd-logind[1505]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:33:33.917096 systemd-logind[1505]: Removed session 16. Dec 13 14:33:33.906000 audit[5627]: CRED_DISP pid=5627 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.940699 kernel: audit: type=1106 audit(1734100413.905:482): pid=5627 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.940830 kernel: audit: type=1104 audit(1734100413.906:483): pid=5627 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:33.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.17:22-10.200.16.10:57476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:39.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.17:22-10.200.16.10:49352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:39.023998 systemd[1]: Started sshd@14-10.200.8.17:22-10.200.16.10:49352.service. Dec 13 14:33:39.029840 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:33:39.029950 kernel: audit: type=1130 audit(1734100419.022:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.17:22-10.200.16.10:49352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:39.389960 systemd[1]: run-containerd-runc-k8s.io-d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5-runc.9BJecR.mount: Deactivated successfully. 
Dec 13 14:33:39.755744 kernel: audit: type=1101 audit(1734100419.736:486): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:39.736000 audit[5640]: USER_ACCT pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:39.756072 sshd[5640]: Accepted publickey for core from 10.200.16.10 port 49352 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:39.756372 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:39.754000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:39.762753 systemd[1]: Started session-17.scope. Dec 13 14:33:39.763756 systemd-logind[1505]: New session 17 of user core. Dec 13 14:33:39.775159 kernel: audit: type=1103 audit(1734100419.754:487): pid=5640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:39.754000 audit[5640]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcee05fd80 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.801290 kernel: audit: type=1006 audit(1734100419.754:488): pid=5640 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 14:33:39.801430 kernel: audit: type=1300 audit(1734100419.754:488): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcee05fd80 a2=3 a3=0 items=0 ppid=1 pid=5640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:39.801461 kernel: audit: type=1327 audit(1734100419.754:488): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:39.754000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:39.773000 audit[5640]: USER_START pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:39.824676 kernel: audit: type=1105 audit(1734100419.773:489): pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:39.824761 kernel: audit: type=1103 audit(1734100419.773:490): pid=5663 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:39.773000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:40.295764 sshd[5640]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:40.295000 audit[5640]: USER_END pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:40.299266 systemd[1]: sshd@14-10.200.8.17:22-10.200.16.10:49352.service: Deactivated successfully. Dec 13 14:33:40.300277 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:33:40.307220 systemd-logind[1505]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:33:40.308185 systemd-logind[1505]: Removed session 17. Dec 13 14:33:40.295000 audit[5640]: CRED_DISP pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:40.331101 kernel: audit: type=1106 audit(1734100420.295:491): pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:40.331169 kernel: audit: type=1104 audit(1734100420.295:492): pid=5640 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:40.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.17:22-10.200.16.10:49352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:40.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.17:22-10.200.16.10:49358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:40.413284 systemd[1]: Started sshd@15-10.200.8.17:22-10.200.16.10:49358.service. 
Dec 13 14:33:41.117000 audit[5673]: USER_ACCT pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:41.119296 sshd[5673]: Accepted publickey for core from 10.200.16.10 port 49358 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:41.118000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:41.118000 audit[5673]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7361b0f0 a2=3 a3=0 items=0 ppid=1 pid=5673 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:41.118000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:41.121028 sshd[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:41.126446 systemd[1]: Started session-18.scope. Dec 13 14:33:41.127416 systemd-logind[1505]: New session 18 of user core. Dec 13 14:33:41.131000 audit[5673]: USER_START pid=5673 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:41.133000 audit[5676]: CRED_ACQ pid=5676 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:41.754295 sshd[5673]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:41.754000 audit[5673]: USER_END pid=5673 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:41.754000 audit[5673]: CRED_DISP pid=5673 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:41.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.17:22-10.200.16.10:49358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:41.758969 systemd[1]: sshd@15-10.200.8.17:22-10.200.16.10:49358.service: Deactivated successfully. Dec 13 14:33:41.760288 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:33:41.761369 systemd-logind[1505]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:33:41.762803 systemd-logind[1505]: Removed session 18. Dec 13 14:33:41.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.17:22-10.200.16.10:49372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:33:41.872316 systemd[1]: Started sshd@16-10.200.8.17:22-10.200.16.10:49372.service. Dec 13 14:33:42.577000 audit[5684]: USER_ACCT pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:42.579344 sshd[5684]: Accepted publickey for core from 10.200.16.10 port 49372 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:42.578000 audit[5684]: CRED_ACQ pid=5684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:42.578000 audit[5684]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda8c5e2e0 a2=3 a3=0 items=0 ppid=1 pid=5684 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:42.578000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:42.580906 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:42.586220 systemd[1]: Started session-19.scope. Dec 13 14:33:42.586974 systemd-logind[1505]: New session 19 of user core. Dec 13 14:33:42.591000 audit[5684]: USER_START pid=5684 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:42.593000 audit[5687]: CRED_ACQ pid=5687 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:44.530000 audit[5699]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:44.569159 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 13 14:33:44.569273 kernel: audit: type=1325 audit(1734100424.530:509): table=filter:124 family=2 entries=20 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:44.569310 kernel: audit: type=1300 audit(1734100424.530:509): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc4f577c00 a2=0 a3=7ffc4f577bec items=0 ppid=2867 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:44.569342 kernel: audit: type=1327 audit(1734100424.530:509): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:44.530000 audit[5699]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc4f577c00 a2=0 a3=7ffc4f577bec items=0 ppid=2867 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:44.530000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:44.587000 audit[5699]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:44.599750 kernel: audit: type=1325 audit(1734100424.587:510): table=nat:125 family=2 entries=22 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:44.587000 audit[5699]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc4f577c00 a2=0 a3=0 items=0 ppid=2867 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:44.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:44.630730 kernel: audit: type=1300 audit(1734100424.587:510): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc4f577c00 a2=0 a3=0 items=0 ppid=2867 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:44.630825 kernel: audit: type=1327 audit(1734100424.587:510): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:44.633000 audit[5701]: NETFILTER_CFG table=filter:126 family=2 entries=32 op=nft_register_rule pid=5701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:44.633000 audit[5701]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffcc3346a90 a2=0 a3=7ffcc3346a7c items=0 ppid=2867 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:44.664706 kernel: audit: type=1325 audit(1734100424.633:511): table=filter:126 family=2 entries=32 op=nft_register_rule pid=5701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:44.664799 kernel: audit: type=1300 audit(1734100424.633:511): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffcc3346a90 a2=0 a3=7ffcc3346a7c items=0 ppid=2867 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:44.677459 kernel: audit: type=1327 audit(1734100424.633:511): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:44.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:44.669175 systemd-logind[1505]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:33:44.666384 sshd[5684]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:44.670215 systemd[1]: sshd@16-10.200.8.17:22-10.200.16.10:49372.service: Deactivated successfully. Dec 13 14:33:44.671018 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:33:44.672152 systemd-logind[1505]: Removed session 19. 
Dec 13 14:33:44.644000 audit[5701]: NETFILTER_CFG table=nat:127 family=2 entries=22 op=nft_register_rule pid=5701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:44.644000 audit[5701]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcc3346a90 a2=0 a3=0 items=0 ppid=2867 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:44.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:44.665000 audit[5684]: USER_END pid=5684 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:44.665000 audit[5684]: CRED_DISP pid=5684 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:44.688758 kernel: audit: type=1325 audit(1734100424.644:512): table=nat:127 family=2 entries=22 op=nft_register_rule pid=5701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:44.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.17:22-10.200.16.10:49372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:44.775203 systemd[1]: Started sshd@17-10.200.8.17:22-10.200.16.10:49388.service. Dec 13 14:33:44.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.17:22-10.200.16.10:49388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:45.481000 audit[5704]: USER_ACCT pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:45.483471 sshd[5704]: Accepted publickey for core from 10.200.16.10 port 49388 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:45.482000 audit[5704]: CRED_ACQ pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:45.483000 audit[5704]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc2f160e0 a2=3 a3=0 items=0 ppid=1 pid=5704 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:45.483000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:45.485484 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:45.493174 systemd[1]: Started session-20.scope. Dec 13 14:33:45.495026 systemd-logind[1505]: New session 20 of user core. 
Dec 13 14:33:45.504000 audit[5704]: USER_START pid=5704 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:45.506000 audit[5710]: CRED_ACQ pid=5710 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:46.154448 sshd[5704]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:46.154000 audit[5704]: USER_END pid=5704 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:46.154000 audit[5704]: CRED_DISP pid=5704 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:46.158062 systemd[1]: sshd@17-10.200.8.17:22-10.200.16.10:49388.service: Deactivated successfully. Dec 13 14:33:46.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.17:22-10.200.16.10:49388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:46.159520 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:33:46.159523 systemd-logind[1505]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:33:46.160726 systemd-logind[1505]: Removed session 20. Dec 13 14:33:46.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.17:22-10.200.16.10:49402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:46.272670 systemd[1]: Started sshd@18-10.200.8.17:22-10.200.16.10:49402.service. 
Dec 13 14:33:46.977000 audit[5718]: USER_ACCT pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:46.979641 sshd[5718]: Accepted publickey for core from 10.200.16.10 port 49402 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:46.979000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:46.979000 audit[5718]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe55609350 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:46.979000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:46.981310 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:46.987200 systemd[1]: Started session-21.scope. Dec 13 14:33:46.987944 systemd-logind[1505]: New session 21 of user core. Dec 13 14:33:46.992000 audit[5718]: USER_START pid=5718 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:46.993000 audit[5721]: CRED_ACQ pid=5721 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:47.535693 sshd[5718]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:47.535000 audit[5718]: USER_END pid=5718 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:47.535000 audit[5718]: CRED_DISP pid=5718 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:47.538868 systemd-logind[1505]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:33:47.539068 systemd[1]: sshd@18-10.200.8.17:22-10.200.16.10:49402.service: Deactivated successfully. Dec 13 14:33:47.540183 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:33:47.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.17:22-10.200.16.10:49402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:47.540714 systemd-logind[1505]: Removed session 21. Dec 13 14:33:52.654631 systemd[1]: Started sshd@19-10.200.8.17:22-10.200.16.10:58126.service. 
Dec 13 14:33:52.678222 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 13 14:33:52.678334 kernel: audit: type=1130 audit(1734100432.653:534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.17:22-10.200.16.10:58126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.17:22-10.200.16.10:58126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:53.359000 audit[5730]: USER_ACCT pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.361059 sshd[5730]: Accepted publickey for core from 10.200.16.10 port 58126 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:53.379749 kernel: audit: type=1101 audit(1734100433.359:535): pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.378000 audit[5730]: CRED_ACQ pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.380502 sshd[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:53.386087 systemd[1]: Started session-22.scope. Dec 13 14:33:53.386711 systemd-logind[1505]: New session 22 of user core. 
Dec 13 14:33:53.409178 kernel: audit: type=1103 audit(1734100433.378:536): pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.409261 kernel: audit: type=1006 audit(1734100433.378:537): pid=5730 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 14:33:53.409290 kernel: audit: type=1300 audit(1734100433.378:537): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc30be0e90 a2=3 a3=0 items=0 ppid=1 pid=5730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:53.378000 audit[5730]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc30be0e90 a2=3 a3=0 items=0 ppid=1 pid=5730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:53.428752 kernel: audit: type=1327 audit(1734100433.378:537): proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:53.378000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:33:53.390000 audit[5730]: USER_START pid=5730 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.431734 kernel: audit: type=1105 audit(1734100433.390:538): pid=5730 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.392000 audit[5733]: CRED_ACQ pid=5733 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.463441 kernel: audit: type=1103 audit(1734100433.392:539): pid=5733 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.914358 sshd[5730]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:53.914000 audit[5730]: USER_END pid=5730 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.918788 systemd-logind[1505]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:33:53.919996 systemd[1]: sshd@19-10.200.8.17:22-10.200.16.10:58126.service: Deactivated successfully. Dec 13 14:33:53.920800 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:33:53.921920 systemd-logind[1505]: Removed session 22. 
Dec 13 14:33:53.914000 audit[5730]: CRED_DISP pid=5730 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.949507 kernel: audit: type=1106 audit(1734100433.914:540): pid=5730 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.949602 kernel: audit: type=1104 audit(1734100433.914:541): pid=5730 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 13 14:33:53.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.17:22-10.200.16.10:58126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:55.592000 audit[5744]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:55.592000 audit[5744]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe1c98cdc0 a2=0 a3=7ffe1c98cdac items=0 ppid=2867 pid=5744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:55.592000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:55.598000 audit[5744]: NETFILTER_CFG table=nat:129 family=2 entries=106 op=nft_register_chain pid=5744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:33:55.598000 audit[5744]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffe1c98cdc0 a2=0 a3=7ffe1c98cdac items=0 ppid=2867 pid=5744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:55.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:33:57.494381 systemd[1]: run-containerd-runc-k8s.io-0c8807f5ea6dbd33b8c31a67d5730c9cc8fe2aec7030aae3b7f858606b495407-runc.4gQOTN.mount: Deactivated successfully. Dec 13 14:33:59.059422 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:33:59.059567 kernel: audit: type=1130 audit(1734100439.030:545): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.17:22-10.200.16.10:50122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:59.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.17:22-10.200.16.10:50122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:59.032075 systemd[1]: Started sshd@20-10.200.8.17:22-10.200.16.10:50122.service. 
Dec 13 14:33:59.742000 audit[5767]: USER_ACCT pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:33:59.744351 sshd[5767]: Accepted publickey for core from 10.200.16.10 port 50122 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:33:59.761000 audit[5767]: CRED_ACQ pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:33:59.763784 sshd[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:59.769404 systemd[1]: Started session-23.scope.
Dec 13 14:33:59.770707 systemd-logind[1505]: New session 23 of user core.
Dec 13 14:33:59.781528 kernel: audit: type=1101 audit(1734100439.742:546): pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:33:59.781625 kernel: audit: type=1103 audit(1734100439.761:547): pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:33:59.761000 audit[5767]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd80dadac0 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:59.807683 kernel: audit: type=1006 audit(1734100439.761:548): pid=5767 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Dec 13 14:33:59.807785 kernel: audit: type=1300 audit(1734100439.761:548): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd80dadac0 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:59.761000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:33:59.808778 kernel: audit: type=1327 audit(1734100439.761:548): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:33:59.773000 audit[5767]: USER_START pid=5767 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:33:59.831272 kernel: audit: type=1105 audit(1734100439.773:549): pid=5767 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:33:59.831974 kernel: audit: type=1103 audit(1734100439.785:550): pid=5770 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:33:59.785000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:00.304705 sshd[5767]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:00.304000 audit[5767]: USER_END pid=5767 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:00.308200 systemd-logind[1505]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:34:00.309638 systemd[1]: sshd@20-10.200.8.17:22-10.200.16.10:50122.service: Deactivated successfully.
Dec 13 14:34:00.310456 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:34:00.311896 systemd-logind[1505]: Removed session 23.
Dec 13 14:34:00.324741 kernel: audit: type=1106 audit(1734100440.304:551): pid=5767 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:00.304000 audit[5767]: CRED_DISP pid=5767 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:00.341740 kernel: audit: type=1104 audit(1734100440.304:552): pid=5767 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:00.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.17:22-10.200.16.10:50122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:05.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.17:22-10.200.16.10:50130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:05.422987 systemd[1]: Started sshd@21-10.200.8.17:22-10.200.16.10:50130.service.
Dec 13 14:34:05.427854 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:34:05.427951 kernel: audit: type=1130 audit(1734100445.422:554): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.17:22-10.200.16.10:50130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:06.130000 audit[5786]: USER_ACCT pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.149579 sshd[5786]: Accepted publickey for core from 10.200.16.10 port 50130 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:34:06.149997 kernel: audit: type=1101 audit(1734100446.130:555): pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.149945 sshd[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:06.148000 audit[5786]: CRED_ACQ pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.155277 systemd[1]: Started session-24.scope.
Dec 13 14:34:06.156373 systemd-logind[1505]: New session 24 of user core.
Dec 13 14:34:06.178299 kernel: audit: type=1103 audit(1734100446.148:556): pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.178400 kernel: audit: type=1006 audit(1734100446.148:557): pid=5786 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Dec 13 14:34:06.181761 kernel: audit: type=1300 audit(1734100446.148:557): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd099df630 a2=3 a3=0 items=0 ppid=1 pid=5786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:06.148000 audit[5786]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd099df630 a2=3 a3=0 items=0 ppid=1 pid=5786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:06.148000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:06.202740 kernel: audit: type=1327 audit(1734100446.148:557): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:06.202821 kernel: audit: type=1105 audit(1734100446.160:558): pid=5786 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.160000 audit[5786]: USER_START pid=5786 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.162000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.235237 kernel: audit: type=1103 audit(1734100446.162:559): pid=5788 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.684207 sshd[5786]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:06.685000 audit[5786]: USER_END pid=5786 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.690927 systemd[1]: sshd@21-10.200.8.17:22-10.200.16.10:50130.service: Deactivated successfully.
Dec 13 14:34:06.691845 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:34:06.698103 systemd-logind[1505]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:34:06.699066 systemd-logind[1505]: Removed session 24.
Dec 13 14:34:06.688000 audit[5786]: CRED_DISP pid=5786 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.721215 kernel: audit: type=1106 audit(1734100446.685:560): pid=5786 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.721374 kernel: audit: type=1104 audit(1734100446.688:561): pid=5786 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:06.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.17:22-10.200.16.10:50130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:09.399513 systemd[1]: run-containerd-runc-k8s.io-d90320f269d04825386e2fb516d0b3c126aee2a69f52f0015f08ab64596e3bc5-runc.Zy76uX.mount: Deactivated successfully.
Dec 13 14:34:11.825060 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:34:11.825165 kernel: audit: type=1130 audit(1734100451.802:563): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.17:22-10.200.16.10:45052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:11.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.17:22-10.200.16.10:45052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:11.802995 systemd[1]: Started sshd@22-10.200.8.17:22-10.200.16.10:45052.service.
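Note: every audit record carries an event identifier of the form audit(EPOCH.MSEC:SERIAL); records sharing a serial (for example :548 above, whose epoch 1734100439 matches the 14:33:59 wall-clock stamps) describe one event, and the kernel "audit: type=..." lines are the same records echoed through printk. A rough sketch, under the assumption that a raw capture like this one is being grouped offline (regex and function name are illustrative, not from any audit library):

    import re
    from collections import defaultdict

    EVENT_ID = re.compile(r"audit\((\d+\.\d+):(\d+)\)")

    def group_by_event(lines):
        """Group raw log lines by their audit event id (epoch.msec, serial)."""
        events = defaultdict(list)
        for line in lines:
            m = EVENT_ID.search(line)
            if m:
                events[(float(m.group(1)), int(m.group(2)))].append(line)
        return events

    # usage: events = group_by_event(open("journal.txt"))
    # events[(1734100439.761, 548)] would then hold the SYSCALL, LOGIN and PROCTITLE lines of one login.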
Dec 13 14:34:12.509000 audit[5824]: USER_ACCT pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:12.528991 sshd[5824]: Accepted publickey for core from 10.200.16.10 port 45052 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:34:12.529750 kernel: audit: type=1101 audit(1734100452.509:564): pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:12.530000 audit[5824]: CRED_ACQ pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:12.537365 systemd[1]: Started session-25.scope.
Dec 13 14:34:12.531694 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:12.538450 systemd-logind[1505]: New session 25 of user core.
Dec 13 14:34:12.548742 kernel: audit: type=1103 audit(1734100452.530:565): pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:12.530000 audit[5824]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3653ee40 a2=3 a3=0 items=0 ppid=1 pid=5824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:12.588513 kernel: audit: type=1006 audit(1734100452.530:566): pid=5824 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Dec 13 14:34:12.588670 kernel: audit: type=1300 audit(1734100452.530:566): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3653ee40 a2=3 a3=0 items=0 ppid=1 pid=5824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:12.530000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:12.543000 audit[5824]: USER_START pid=5824 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:12.614762 kernel: audit: type=1327 audit(1734100452.530:566): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:12.614843 kernel: audit: type=1105 audit(1734100452.543:567): pid=5824 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:12.614873 kernel: audit: type=1103 audit(1734100452.550:568): pid=5826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:12.550000 audit[5826]: CRED_ACQ pid=5826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:13.074339 sshd[5824]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:13.075000 audit[5824]: USER_END pid=5824 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:13.077948 systemd-logind[1505]: Session 25 logged out. Waiting for processes to exit.
Dec 13 14:34:13.079381 systemd[1]: sshd@22-10.200.8.17:22-10.200.16.10:45052.service: Deactivated successfully.
Dec 13 14:34:13.080199 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 14:34:13.081869 systemd-logind[1505]: Removed session 25.
Dec 13 14:34:13.095749 kernel: audit: type=1106 audit(1734100453.075:569): pid=5824 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:13.095839 kernel: audit: type=1104 audit(1734100453.075:570): pid=5824 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:13.075000 audit[5824]: CRED_DISP pid=5824 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:13.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.17:22-10.200.16.10:45052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:18.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.17:22-10.200.16.10:45056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:18.192643 systemd[1]: Started sshd@23-10.200.8.17:22-10.200.16.10:45056.service.
Dec 13 14:34:18.197671 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:34:18.197772 kernel: audit: type=1130 audit(1734100458.192:572): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.17:22-10.200.16.10:45056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:18.901000 audit[5839]: USER_ACCT pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:18.922463 kernel: audit: type=1101 audit(1734100458.901:573): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:18.922170 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:18.922982 sshd[5839]: Accepted publickey for core from 10.200.16.10 port 45056 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:34:18.920000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:18.941053 systemd[1]: Started session-26.scope.
Dec 13 14:34:18.942977 kernel: audit: type=1103 audit(1734100458.920:574): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:18.943068 kernel: audit: type=1006 audit(1734100458.921:575): pid=5839 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Dec 13 14:34:18.942390 systemd-logind[1505]: New session 26 of user core.
Dec 13 14:34:18.921000 audit[5839]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc987c8440 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:18.921000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:18.977776 kernel: audit: type=1300 audit(1734100458.921:575): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc987c8440 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:18.977869 kernel: audit: type=1327 audit(1734100458.921:575): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:18.977901 kernel: audit: type=1105 audit(1734100458.947:576): pid=5839 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:18.947000 audit[5839]: USER_START pid=5839 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:18.947000 audit[5842]: CRED_ACQ pid=5842 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:19.010635 kernel: audit: type=1103 audit(1734100458.947:577): pid=5842 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:19.459912 sshd[5839]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:19.461000 audit[5839]: USER_END pid=5839 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:19.464000 systemd-logind[1505]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:34:19.465679 systemd[1]: sshd@23-10.200.8.17:22-10.200.16.10:45056.service: Deactivated successfully.
Dec 13 14:34:19.466575 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:34:19.467997 systemd-logind[1505]: Removed session 26.
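Note: the auid=4294967295 values seen before pam_loginuid runs are just the unsigned form of -1, meaning "no login UID assigned yet"; the type=1006 LOGIN records above show it being replaced by auid=500 (and a fresh ses number) once the session is established. A one-line illustrative check:

    import ctypes
    # 4294967295 is (uint32)-1, the kernel's "unset" marker for auid and ses
    assert ctypes.c_uint32(-1).value == 4294967295 == 2**32 - 1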
Dec 13 14:34:19.461000 audit[5839]: CRED_DISP pid=5839 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:19.500580 kernel: audit: type=1106 audit(1734100459.461:578): pid=5839 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:19.500662 kernel: audit: type=1104 audit(1734100459.461:579): pid=5839 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:19.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.17:22-10.200.16.10:45056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:24.577500 systemd[1]: Started sshd@24-10.200.8.17:22-10.200.16.10:49972.service.
Dec 13 14:34:24.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.17:22-10.200.16.10:49972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:24.582248 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:34:24.582343 kernel: audit: type=1130 audit(1734100464.577:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.17:22-10.200.16.10:49972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:25.290000 audit[5864]: USER_ACCT pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.291084 sshd[5864]: Accepted publickey for core from 10.200.16.10 port 49972 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:34:25.309000 audit[5864]: CRED_ACQ pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.310684 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:25.316973 systemd-logind[1505]: New session 27 of user core.
Dec 13 14:34:25.317573 systemd[1]: Started session-27.scope.
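Note: the sshd "Accepted publickey" lines carry user, peer address, port and key fingerprint in a fixed layout, so the successful logins in this capture can be pulled out with a single pattern. The regex below is an assumption about that layout, checked only against the lines shown here:

    import re

    ACCEPTED = re.compile(
        r"sshd\[\d+\]: Accepted (?P<method>\S+) for (?P<user>\S+) "
        r"from (?P<addr>\S+) port (?P<port>\d+) ssh2: (?P<key>.+)"
    )

    def parse_accepted(line):
        """Return the login fields of an 'Accepted ...' line, or None if it is not one."""
        m = ACCEPTED.search(line)
        return m.groupdict() if m else None

    # e.g. parse_accepted(...) on the session-27 line above yields addr "10.200.16.10" and port "49972".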
Dec 13 14:34:25.327241 kernel: audit: type=1101 audit(1734100465.290:582): pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.327335 kernel: audit: type=1103 audit(1734100465.309:583): pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.309000 audit[5864]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd545b1660 a2=3 a3=0 items=0 ppid=1 pid=5864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:25.355576 kernel: audit: type=1006 audit(1734100465.309:584): pid=5864 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Dec 13 14:34:25.355668 kernel: audit: type=1300 audit(1734100465.309:584): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd545b1660 a2=3 a3=0 items=0 ppid=1 pid=5864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:34:25.355699 kernel: audit: type=1327 audit(1734100465.309:584): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:25.309000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:34:25.322000 audit[5864]: USER_START pid=5864 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.378962 kernel: audit: type=1105 audit(1734100465.322:585): pid=5864 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.379050 kernel: audit: type=1103 audit(1734100465.327:586): pid=5866 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.327000 audit[5866]: CRED_ACQ pid=5866 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.847476 sshd[5864]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:25.848000 audit[5864]: USER_END pid=5864 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.850962 systemd-logind[1505]: Session 27 logged out. Waiting for processes to exit.
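Note: the numeric type= values in the kernel-echoed lines correspond to the named records that appear alongside them in this capture; a small lookup table, limited to the pairs actually observable above, makes grepping the raw journal easier:

    # audit record types as they pair up in this log (kernel "type=NNNN" lines vs named records)
    AUDIT_TYPES = {
        1006: "LOGIN",
        1101: "USER_ACCT",
        1103: "CRED_ACQ",
        1104: "CRED_DISP",
        1105: "USER_START",
        1106: "USER_END",
        1130: "SERVICE_START",
        1300: "SYSCALL",
        1327: "PROCTITLE",
    }

    def name_for(type_id: int) -> str:
        """Map a numeric audit type to its record name, falling back to the raw number."""
        return AUDIT_TYPES.get(type_id, f"type={type_id}")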
Dec 13 14:34:25.852482 systemd[1]: sshd@24-10.200.8.17:22-10.200.16.10:49972.service: Deactivated successfully.
Dec 13 14:34:25.853372 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 14:34:25.854542 systemd-logind[1505]: Removed session 27.
Dec 13 14:34:25.848000 audit[5864]: CRED_DISP pid=5864 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.884194 kernel: audit: type=1106 audit(1734100465.848:587): pid=5864 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.884310 kernel: audit: type=1104 audit(1734100465.848:588): pid=5864 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 13 14:34:25.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.17:22-10.200.16.10:49972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:34:27.495926 systemd[1]: run-containerd-runc-k8s.io-0c8807f5ea6dbd33b8c31a67d5730c9cc8fe2aec7030aae3b7f858606b495407-runc.qdIWMr.mount: Deactivated successfully.