Dec 13 14:24:53.130406 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:24:53.130449 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:24:53.130467 kernel: BIOS-provided physical RAM map: Dec 13 14:24:53.130481 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 14:24:53.130493 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 14:24:53.137412 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 14:24:53.137446 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 14:24:53.137461 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 14:24:53.137475 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable Dec 13 14:24:53.137489 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data Dec 13 14:24:53.137503 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable Dec 13 14:24:53.137517 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 13 14:24:53.137530 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 14:24:53.137544 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 14:24:53.137564 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 14:24:53.137579 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 14:24:53.137594 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 13 14:24:53.137608 kernel: NX (Execute Disable) protection: active Dec 13 14:24:53.137623 kernel: efi: EFI v2.70 by EDK II Dec 13 14:24:53.137639 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018 Dec 13 14:24:53.137654 kernel: random: crng init done Dec 13 14:24:53.137668 kernel: SMBIOS 2.4 present. 
Dec 13 14:24:53.137687 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 14:24:53.137701 kernel: Hypervisor detected: KVM Dec 13 14:24:53.137716 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:24:53.137731 kernel: kvm-clock: cpu 0, msr 1e819a001, primary cpu clock Dec 13 14:24:53.137745 kernel: kvm-clock: using sched offset of 13232492407 cycles Dec 13 14:24:53.137761 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:24:53.137777 kernel: tsc: Detected 2299.998 MHz processor Dec 13 14:24:53.137791 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:24:53.137808 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:24:53.137823 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 14:24:53.137841 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:24:53.137856 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 14:24:53.137870 kernel: Using GB pages for direct mapping Dec 13 14:24:53.137885 kernel: Secure boot disabled Dec 13 14:24:53.137900 kernel: ACPI: Early table checksum verification disabled Dec 13 14:24:53.137916 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 14:24:53.137931 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 14:24:53.137946 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 14:24:53.137971 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 14:24:53.137986 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 14:24:53.138002 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 14:24:53.138018 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 14:24:53.138034 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 14:24:53.138051 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 14:24:53.138070 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 14:24:53.138086 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 14:24:53.138102 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 14:24:53.138118 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 14:24:53.138134 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 14:24:53.138150 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 14:24:53.138166 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 14:24:53.138182 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 14:24:53.138199 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 14:24:53.138218 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 14:24:53.138234 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 14:24:53.138250 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 14:24:53.138266 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 14:24:53.138282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 14:24:53.138298 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 
14:24:53.138320 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 14:24:53.138335 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 14:24:53.138349 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 14:24:53.138382 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 14:24:53.138398 kernel: Zone ranges: Dec 13 14:24:53.138414 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:24:53.138431 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 14:24:53.138447 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 14:24:53.138462 kernel: Movable zone start for each node Dec 13 14:24:53.138479 kernel: Early memory node ranges Dec 13 14:24:53.138495 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 14:24:53.138511 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 14:24:53.138531 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff] Dec 13 14:24:53.138546 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff] Dec 13 14:24:53.138562 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 14:24:53.138578 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 14:24:53.138595 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 14:24:53.138611 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:24:53.138626 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 14:24:53.138643 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 14:24:53.138659 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 13 14:24:53.138679 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 14:24:53.138696 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 14:24:53.138711 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 14:24:53.138727 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:24:53.138743 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:24:53.138760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:24:53.138776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:24:53.138792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:24:53.138808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:24:53.138828 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:24:53.138844 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:24:53.138860 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 14:24:53.138876 kernel: Booting paravirtualized kernel on KVM Dec 13 14:24:53.138892 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:24:53.138908 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:24:53.138924 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 14:24:53.138940 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 14:24:53.138956 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:24:53.138976 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:24:53.138991 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:24:53.139007 kernel: Built 1 zonelists, mobility 
grouping on. Total pages: 1932270 Dec 13 14:24:53.139023 kernel: Policy zone: Normal Dec 13 14:24:53.139042 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:24:53.139059 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:24:53.139074 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 14:24:53.139090 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:24:53.139106 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:24:53.139126 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 344876K reserved, 0K cma-reserved) Dec 13 14:24:53.139143 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:24:53.139159 kernel: Kernel/User page tables isolation: enabled Dec 13 14:24:53.139174 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:24:53.139190 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:24:53.139206 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:24:53.139224 kernel: rcu: RCU event tracing is enabled. Dec 13 14:24:53.139240 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:24:53.139259 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:24:53.139288 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:24:53.139313 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:24:53.139334 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:24:53.139361 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 14:24:53.139378 kernel: Console: colour dummy device 80x25 Dec 13 14:24:53.139395 kernel: printk: console [ttyS0] enabled Dec 13 14:24:53.139412 kernel: ACPI: Core revision 20210730 Dec 13 14:24:53.139429 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:24:53.139446 kernel: x2apic enabled Dec 13 14:24:53.139466 kernel: Switched APIC routing to physical x2apic. Dec 13 14:24:53.139483 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 14:24:53.139500 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 14:24:53.139518 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 14:24:53.139534 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 14:24:53.139552 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 14:24:53.139568 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:24:53.139589 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 14:24:53.139606 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 14:24:53.139623 kernel: Spectre V2 : Mitigation: IBRS Dec 13 14:24:53.139640 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:24:53.139657 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:24:53.139674 kernel: RETBleed: Mitigation: IBRS Dec 13 14:24:53.139691 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 14:24:53.139708 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Dec 13 14:24:53.139725 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 14:24:53.139745 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 14:24:53.139762 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:24:53.139779 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:24:53.139795 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:24:53.139810 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:24:53.139824 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:24:53.139839 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 14:24:53.139853 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:24:53.139867 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:24:53.139885 kernel: LSM: Security Framework initializing Dec 13 14:24:53.139900 kernel: SELinux: Initializing. Dec 13 14:24:53.139914 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:24:53.139929 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:24:53.139945 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 14:24:53.139963 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 14:24:53.139979 kernel: signal: max sigframe size: 1776 Dec 13 14:24:53.139996 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:24:53.140012 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 14:24:53.140034 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:24:53.140049 kernel: x86: Booting SMP configuration: Dec 13 14:24:53.140064 kernel: .... node #0, CPUs: #1 Dec 13 14:24:53.140080 kernel: kvm-clock: cpu 1, msr 1e819a041, secondary cpu clock Dec 13 14:24:53.140098 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 14:24:53.140116 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 14:24:53.140132 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:24:53.140168 kernel: smpboot: Max logical packages: 1 Dec 13 14:24:53.140190 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 14:24:53.140207 kernel: devtmpfs: initialized Dec 13 14:24:53.140224 kernel: x86/mm: Memory block size: 128MB Dec 13 14:24:53.140241 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 14:24:53.140259 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:24:53.140276 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:24:53.140293 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:24:53.140329 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:24:53.140347 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:24:53.140384 kernel: audit: type=2000 audit(1734099891.557:1): state=initialized audit_enabled=0 res=1 Dec 13 14:24:53.140402 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:24:53.140419 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:24:53.140436 kernel: cpuidle: using governor menu Dec 13 14:24:53.140453 kernel: ACPI: bus type PCI registered Dec 13 14:24:53.140470 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:24:53.140487 kernel: dca service started, version 1.12.1 Dec 13 14:24:53.140504 kernel: PCI: Using configuration type 1 for base access Dec 13 14:24:53.140521 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 14:24:53.140541 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:24:53.140559 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:24:53.140576 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:24:53.140593 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:24:53.140610 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:24:53.140626 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:24:53.140643 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:24:53.140661 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:24:53.140678 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:24:53.140698 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 14:24:53.140715 kernel: ACPI: Interpreter enabled Dec 13 14:24:53.140732 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:24:53.140749 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:24:53.140766 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:24:53.140784 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 14:24:53.140801 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:24:53.141051 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:24:53.141223 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 14:24:53.141247 kernel: PCI host bridge to bus 0000:00 Dec 13 14:24:53.141456 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:24:53.141609 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:24:53.141755 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:24:53.141917 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 14:24:53.142062 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:24:53.142247 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 14:24:53.142457 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 14:24:53.142782 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 14:24:53.142982 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 14:24:53.143157 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 14:24:53.143337 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 14:24:53.143545 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 14:24:53.143727 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:24:53.143893 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 14:24:53.144054 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 14:24:53.144233 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:24:53.144429 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 14:24:53.144590 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 14:24:53.144617 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:24:53.144635 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:24:53.144653 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:24:53.144670 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:24:53.144688 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 14:24:53.144705 kernel: iommu: Default domain type: Translated Dec 13 14:24:53.144722 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:24:53.144739 kernel: vgaarb: loaded Dec 13 14:24:53.144757 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:24:53.144778 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:24:53.144795 kernel: PTP clock support registered Dec 13 14:24:53.144812 kernel: Registered efivars operations Dec 13 14:24:53.144829 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:24:53.144846 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:24:53.144862 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 14:24:53.144880 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 14:24:53.144897 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff] Dec 13 14:24:53.144914 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 14:24:53.144934 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 14:24:53.144951 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:24:53.144968 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:24:53.144986 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:24:53.145003 kernel: pnp: PnP ACPI init Dec 13 14:24:53.145020 kernel: pnp: PnP ACPI: found 7 devices Dec 13 14:24:53.145038 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:24:53.145056 kernel: NET: Registered PF_INET protocol family Dec 13 14:24:53.145073 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:24:53.145093 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 14:24:53.145110 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:24:53.145128 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:24:53.145145 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:24:53.145163 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 14:24:53.145180 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:24:53.145198 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:24:53.145215 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:24:53.145236 kernel: NET: Registered PF_XDP protocol family Dec 13 14:24:53.145444 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:24:53.145594 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:24:53.145739 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:24:53.145880 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 14:24:53.146054 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 14:24:53.146080 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:24:53.146104 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:24:53.146122 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 14:24:53.146140 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:24:53.146157 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 14:24:53.146175 kernel: clocksource: Switched to clocksource tsc Dec 13 14:24:53.146192 kernel: Initialise system trusted keyrings Dec 13 14:24:53.146210 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 14:24:53.146228 kernel: Key type asymmetric registered Dec 13 14:24:53.146245 kernel: Asymmetric key parser 'x509' registered Dec 13 14:24:53.146265 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:24:53.146283 kernel: io scheduler mq-deadline registered Dec 13 14:24:53.146299 kernel: io scheduler kyber registered Dec 13 14:24:53.146316 kernel: io scheduler bfq registered Dec 13 14:24:53.146333 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:24:53.146382 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 14:24:53.146556 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 14:24:53.146579 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 14:24:53.146738 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 14:24:53.146764 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 14:24:53.146927 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 14:24:53.146949 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:24:53.146966 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:24:53.146983 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 14:24:53.147000 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 14:24:53.147016 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 14:24:53.147189 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 14:24:53.147217 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:24:53.147234 kernel: i8042: Warning: Keylock active Dec 13 14:24:53.147250 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:24:53.147267 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:24:53.147452 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 14:24:53.147604 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 14:24:53.147750 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:24:52 UTC (1734099892) Dec 13 14:24:53.147896 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 14:24:53.147921 kernel: intel_pstate: CPU model not supported Dec 13 14:24:53.147938 kernel: pstore: Registered efi as persistent store backend Dec 13 14:24:53.147955 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:24:53.147971 kernel: Segment Routing with IPv6 Dec 13 14:24:53.147988 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:24:53.148004 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:24:53.148022 kernel: Key type dns_resolver registered Dec 13 14:24:53.148039 kernel: IPI shorthand broadcast: enabled Dec 13 14:24:53.148063 kernel: sched_clock: Marking stable (812076892, 187585638)->(1061638022, -61975492) Dec 13 14:24:53.148084 kernel: registered taskstats version 1 Dec 13 14:24:53.148102 kernel: Loading compiled-in X.509 certificates Dec 13 14:24:53.148118 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:24:53.148135 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:24:53.148151 kernel: Key type .fscrypt registered Dec 13 14:24:53.148166 kernel: Key type fscrypt-provisioning registered Dec 13 14:24:53.148183 kernel: pstore: Using crash dump compression: deflate Dec 13 14:24:53.148199 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:24:53.148216 kernel: ima: No architecture policies found Dec 13 14:24:53.148237 kernel: clk: Disabling unused clocks Dec 13 14:24:53.148255 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 
14:24:53.148272 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:24:53.148289 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:24:53.148306 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:24:53.148323 kernel: Run /init as init process Dec 13 14:24:53.148340 kernel: with arguments: Dec 13 14:24:53.153500 kernel: /init Dec 13 14:24:53.153527 kernel: with environment: Dec 13 14:24:53.153555 kernel: HOME=/ Dec 13 14:24:53.153573 kernel: TERM=linux Dec 13 14:24:53.153588 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:24:53.153609 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:24:53.153631 systemd[1]: Detected virtualization kvm. Dec 13 14:24:53.153649 systemd[1]: Detected architecture x86-64. Dec 13 14:24:53.153667 systemd[1]: Running in initrd. Dec 13 14:24:53.153689 systemd[1]: No hostname configured, using default hostname. Dec 13 14:24:53.153704 systemd[1]: Hostname set to . Dec 13 14:24:53.153722 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:24:53.153739 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:24:53.153756 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:24:53.153773 systemd[1]: Reached target cryptsetup.target. Dec 13 14:24:53.153790 systemd[1]: Reached target paths.target. Dec 13 14:24:53.153808 systemd[1]: Reached target slices.target. Dec 13 14:24:53.153828 systemd[1]: Reached target swap.target. Dec 13 14:24:53.153845 systemd[1]: Reached target timers.target. Dec 13 14:24:53.153863 systemd[1]: Listening on iscsid.socket. Dec 13 14:24:53.153880 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:24:53.153898 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:24:53.153916 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:24:53.153935 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:24:53.153953 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:24:53.153974 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:24:53.153992 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:24:53.154028 systemd[1]: Reached target sockets.target. Dec 13 14:24:53.154049 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:24:53.154068 systemd[1]: Finished network-cleanup.service. Dec 13 14:24:53.154085 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:24:53.154106 systemd[1]: Starting systemd-journald.service... Dec 13 14:24:53.154125 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:24:53.154142 systemd[1]: Starting systemd-resolved.service... Dec 13 14:24:53.154160 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:24:53.154179 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:24:53.154199 kernel: audit: type=1130 audit(1734099893.135:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.154217 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 13 14:24:53.154244 kernel: audit: type=1130 audit(1734099893.144:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.154264 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:24:53.154292 systemd-journald[190]: Journal started Dec 13 14:24:53.161754 systemd-journald[190]: Runtime Journal (/run/log/journal/9f3cf6a40ecfbe6d1f7fee88e0caa704) is 8.0M, max 148.8M, 140.8M free. Dec 13 14:24:53.161852 kernel: audit: type=1130 audit(1734099893.154:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.180549 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:24:53.180631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:24:53.180657 systemd[1]: Started systemd-journald.service. Dec 13 14:24:53.164494 systemd-modules-load[191]: Inserted module 'overlay' Dec 13 14:24:53.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.192377 kernel: audit: type=1130 audit(1734099893.174:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.195965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:24:53.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.201379 kernel: audit: type=1130 audit(1734099893.194:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.214769 systemd-resolved[192]: Positive Trust Anchors: Dec 13 14:24:53.215332 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:24:53.215770 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:24:53.228754 systemd-resolved[192]: Defaulting to hostname 'linux'. 
Dec 13 14:24:53.230697 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:24:53.239248 kernel: audit: type=1130 audit(1734099893.229:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.239293 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:24:53.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.232311 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:24:53.258097 kernel: audit: type=1130 audit(1734099893.245:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.258144 kernel: Bridge firewalling registered Dec 13 14:24:53.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.258262 dracut-cmdline[206]: dracut-dracut-053 Dec 13 14:24:53.258262 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 14:24:53.258262 dracut-cmdline[206]: BEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:24:53.288499 kernel: SCSI subsystem initialized Dec 13 14:24:53.242610 systemd[1]: Started systemd-resolved.service. Dec 13 14:24:53.246603 systemd[1]: Reached target nss-lookup.target. Dec 13 14:24:53.255479 systemd-modules-load[191]: Inserted module 'br_netfilter' Dec 13 14:24:53.309307 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:24:53.309394 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:24:53.312384 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:24:53.316793 systemd-modules-load[191]: Inserted module 'dm_multipath' Dec 13 14:24:53.317979 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:24:53.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.331735 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:24:53.341539 kernel: audit: type=1130 audit(1734099893.329:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.346017 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:24:53.356552 kernel: audit: type=1130 audit(1734099893.348:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:53.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.368388 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:24:53.389407 kernel: iscsi: registered transport (tcp) Dec 13 14:24:53.416535 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:24:53.416650 kernel: QLogic iSCSI HBA Driver Dec 13 14:24:53.463381 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:24:53.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.465022 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:24:53.523415 kernel: raid6: avx2x4 gen() 18388 MB/s Dec 13 14:24:53.541420 kernel: raid6: avx2x4 xor() 7990 MB/s Dec 13 14:24:53.559431 kernel: raid6: avx2x2 gen() 18052 MB/s Dec 13 14:24:53.576399 kernel: raid6: avx2x2 xor() 18422 MB/s Dec 13 14:24:53.593401 kernel: raid6: avx2x1 gen() 14283 MB/s Dec 13 14:24:53.611392 kernel: raid6: avx2x1 xor() 16054 MB/s Dec 13 14:24:53.629393 kernel: raid6: sse2x4 gen() 10979 MB/s Dec 13 14:24:53.647404 kernel: raid6: sse2x4 xor() 6621 MB/s Dec 13 14:24:53.665397 kernel: raid6: sse2x2 gen() 11916 MB/s Dec 13 14:24:53.683396 kernel: raid6: sse2x2 xor() 7371 MB/s Dec 13 14:24:53.700398 kernel: raid6: sse2x1 gen() 10483 MB/s Dec 13 14:24:53.719489 kernel: raid6: sse2x1 xor() 5167 MB/s Dec 13 14:24:53.719537 kernel: raid6: using algorithm avx2x4 gen() 18388 MB/s Dec 13 14:24:53.719559 kernel: raid6: .... xor() 7990 MB/s, rmw enabled Dec 13 14:24:53.719580 kernel: raid6: using avx2x2 recovery algorithm Dec 13 14:24:53.741415 kernel: xor: automatically using best checksumming function avx Dec 13 14:24:53.852395 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:24:53.864252 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:24:53.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.871000 audit: BPF prog-id=7 op=LOAD Dec 13 14:24:53.871000 audit: BPF prog-id=8 op=LOAD Dec 13 14:24:53.873863 systemd[1]: Starting systemd-udevd.service... Dec 13 14:24:53.890416 systemd-udevd[388]: Using default interface naming scheme 'v252'. Dec 13 14:24:53.897410 systemd[1]: Started systemd-udevd.service. Dec 13 14:24:53.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.919732 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:24:53.935620 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Dec 13 14:24:53.973587 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:24:53.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:53.974857 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:24:54.043102 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 14:24:54.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:54.130542 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:24:54.139379 kernel: scsi host0: Virtio SCSI HBA Dec 13 14:24:54.156103 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 14:24:54.268779 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:24:54.268863 kernel: AES CTR mode by8 optimization enabled Dec 13 14:24:54.291021 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 14:24:54.353079 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 14:24:54.353383 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 14:24:54.353594 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 14:24:54.353792 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 14:24:54.353999 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:24:54.354025 kernel: GPT:17805311 != 25165823 Dec 13 14:24:54.354048 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:24:54.354070 kernel: GPT:17805311 != 25165823 Dec 13 14:24:54.354091 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:24:54.354113 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:24:54.354136 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 14:24:54.417396 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (432) Dec 13 14:24:54.435373 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:24:54.445555 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:24:54.471561 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:24:54.491633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:24:54.509562 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:24:54.523683 systemd[1]: Starting disk-uuid.service... Dec 13 14:24:54.545742 disk-uuid[513]: Primary Header is updated. Dec 13 14:24:54.545742 disk-uuid[513]: Secondary Entries is updated. Dec 13 14:24:54.545742 disk-uuid[513]: Secondary Header is updated. Dec 13 14:24:54.570510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:24:54.583399 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:24:54.608385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:24:55.601406 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:24:55.602059 disk-uuid[514]: The operation has completed successfully. Dec 13 14:24:55.673626 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:24:55.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:55.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:55.673762 systemd[1]: Finished disk-uuid.service. Dec 13 14:24:55.691909 systemd[1]: Starting verity-setup.service... Dec 13 14:24:55.721380 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:24:55.807029 systemd[1]: Found device dev-mapper-usr.device. 
Dec 13 14:24:55.815837 systemd[1]: Finished verity-setup.service. Dec 13 14:24:55.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:55.831782 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:24:55.932424 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:24:55.932945 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:24:55.933344 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:24:55.981538 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:24:55.981582 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:24:55.981604 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:24:55.934307 systemd[1]: Starting ignition-setup.service... Dec 13 14:24:56.000535 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:24:55.946867 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:24:56.012915 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:24:56.032031 systemd[1]: Finished ignition-setup.service. Dec 13 14:24:56.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.033898 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:24:56.066096 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:24:56.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.075000 audit: BPF prog-id=9 op=LOAD Dec 13 14:24:56.077966 systemd[1]: Starting systemd-networkd.service... Dec 13 14:24:56.111855 systemd-networkd[688]: lo: Link UP Dec 13 14:24:56.111869 systemd-networkd[688]: lo: Gained carrier Dec 13 14:24:56.112733 systemd-networkd[688]: Enumeration completed Dec 13 14:24:56.113156 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:24:56.113379 systemd[1]: Started systemd-networkd.service. Dec 13 14:24:56.115600 systemd-networkd[688]: eth0: Link UP Dec 13 14:24:56.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.115608 systemd-networkd[688]: eth0: Gained carrier Dec 13 14:24:56.125506 systemd-networkd[688]: eth0: DHCPv4 address 10.128.0.74/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 14:24:56.158819 systemd[1]: Reached target network.target. Dec 13 14:24:56.174691 systemd[1]: Starting iscsiuio.service... Dec 13 14:24:56.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.200820 systemd[1]: Started iscsiuio.service. Dec 13 14:24:56.228712 iscsid[698]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:24:56.228712 iscsid[698]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:24:56.228712 iscsid[698]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 14:24:56.228712 iscsid[698]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:24:56.228712 iscsid[698]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:24:56.228712 iscsid[698]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:24:56.228712 iscsid[698]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:24:56.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.216682 systemd[1]: Starting iscsid.service... Dec 13 14:24:56.330844 ignition[662]: Ignition 2.14.0 Dec 13 14:24:56.236752 systemd[1]: Started iscsid.service. Dec 13 14:24:56.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.330868 ignition[662]: Stage: fetch-offline Dec 13 14:24:56.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.238110 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:24:56.331482 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:56.257597 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:24:56.331528 ignition[662]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:24:56.285937 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:24:56.353574 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:24:56.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.319725 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:24:56.353780 ignition[662]: parsed url from cmdline: "" Dec 13 14:24:56.344529 systemd[1]: Reached target remote-fs.target. Dec 13 14:24:56.353787 ignition[662]: no config URL provided Dec 13 14:24:56.364648 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:24:56.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.353795 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:24:56.388972 systemd[1]: Finished ignition-fetch-offline.service. 
Dec 13 14:24:56.353807 ignition[662]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:24:56.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.402940 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:24:56.353816 ignition[662]: failed to fetch config: resource requires networking Dec 13 14:24:56.419075 systemd[1]: Starting ignition-fetch.service... Dec 13 14:24:56.354109 ignition[662]: Ignition finished successfully Dec 13 14:24:56.453110 unknown[713]: fetched base config from "system" Dec 13 14:24:56.430264 ignition[713]: Ignition 2.14.0 Dec 13 14:24:56.453132 unknown[713]: fetched base config from "system" Dec 13 14:24:56.430273 ignition[713]: Stage: fetch Dec 13 14:24:56.453143 unknown[713]: fetched user config from "gcp" Dec 13 14:24:56.430449 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:56.460031 systemd[1]: Finished ignition-fetch.service. Dec 13 14:24:56.430491 ignition[713]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:24:56.477136 systemd[1]: Starting ignition-kargs.service... Dec 13 14:24:56.438572 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:24:56.509011 systemd[1]: Finished ignition-kargs.service. Dec 13 14:24:56.438771 ignition[713]: parsed url from cmdline: "" Dec 13 14:24:56.516897 systemd[1]: Starting ignition-disks.service... Dec 13 14:24:56.438777 ignition[713]: no config URL provided Dec 13 14:24:56.546850 systemd[1]: Finished ignition-disks.service. Dec 13 14:24:56.438784 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:24:56.562763 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:24:56.438796 ignition[713]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:24:56.578564 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:24:56.438833 ignition[713]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 14:24:56.592542 systemd[1]: Reached target local-fs.target. Dec 13 14:24:56.446907 ignition[713]: GET result: OK Dec 13 14:24:56.607523 systemd[1]: Reached target sysinit.target. Dec 13 14:24:56.447013 ignition[713]: parsing config with SHA512: 3639c95b44c297530ad4033c8241105e06bf30be0f8467301a2b6150eb8ef67fd1fc15a69cee9f732f2d3e139ed88fa97016cd59c975b8d89bf1bba5a78812bc Dec 13 14:24:56.607636 systemd[1]: Reached target basic.target. Dec 13 14:24:56.454124 ignition[713]: fetch: fetch complete Dec 13 14:24:56.628785 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 14:24:56.454131 ignition[713]: fetch: fetch passed Dec 13 14:24:56.454181 ignition[713]: Ignition finished successfully Dec 13 14:24:56.489801 ignition[719]: Ignition 2.14.0 Dec 13 14:24:56.489811 ignition[719]: Stage: kargs Dec 13 14:24:56.489955 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:56.490000 ignition[719]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:24:56.498185 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:24:56.499966 ignition[719]: kargs: kargs passed Dec 13 14:24:56.500027 ignition[719]: Ignition finished successfully Dec 13 14:24:56.528314 ignition[725]: Ignition 2.14.0 Dec 13 14:24:56.528326 ignition[725]: Stage: disks Dec 13 14:24:56.528508 ignition[725]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:56.528541 ignition[725]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:24:56.536344 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:24:56.537799 ignition[725]: disks: disks passed Dec 13 14:24:56.537854 ignition[725]: Ignition finished successfully Dec 13 14:24:56.674807 systemd-fsck[733]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 14:24:56.868464 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:24:56.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:56.877757 systemd[1]: Mounting sysroot.mount... Dec 13 14:24:56.906397 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:24:56.906795 systemd[1]: Mounted sysroot.mount. Dec 13 14:24:56.907149 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:24:56.931280 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:24:56.946051 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:24:56.946140 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:24:56.946189 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:24:56.967124 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:24:57.009440 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (739) Dec 13 14:24:56.987164 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:24:57.036521 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:24:57.036554 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:24:57.036578 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:24:57.044745 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:24:57.051399 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:24:57.053927 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:24:57.086516 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:24:57.066595 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:24:57.104583 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:24:57.114491 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:24:57.157748 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:24:57.197545 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 14:24:57.197586 kernel: audit: type=1130 audit(1734099897.156:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:57.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:57.159305 systemd[1]: Starting ignition-mount.service... Dec 13 14:24:57.205666 systemd[1]: Starting sysroot-boot.service... Dec 13 14:24:57.219862 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:24:57.220030 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:24:57.245518 ignition[805]: INFO : Ignition 2.14.0 Dec 13 14:24:57.245518 ignition[805]: INFO : Stage: mount Dec 13 14:24:57.245518 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:57.245518 ignition[805]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:24:57.343539 kernel: audit: type=1130 audit(1734099897.266:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:57.343599 kernel: audit: type=1130 audit(1734099897.297:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:57.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:57.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:57.250918 systemd[1]: Finished sysroot-boot.service. Dec 13 14:24:57.357545 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:24:57.357545 ignition[805]: INFO : mount: mount passed Dec 13 14:24:57.357545 ignition[805]: INFO : Ignition finished successfully Dec 13 14:24:57.417517 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (815) Dec 13 14:24:57.417573 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:24:57.417589 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:24:57.417603 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:24:57.417618 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:24:57.267942 systemd[1]: Finished ignition-mount.service. Dec 13 14:24:57.300212 systemd[1]: Starting ignition-files.service... Dec 13 14:24:57.354644 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:24:57.441909 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
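Annotation: each Ignition stage above logs "parsing config with SHA512: …" for the built-in config it reads from /usr/lib/ignition/base.d/base.ign. Assuming that digest is computed over the file's raw bytes as read, it can be reproduced locally with a short Python sketch like the one below (illustrative only; the path is the one named in the log).

import hashlib
from pathlib import Path

# Read the base config exactly as stored and print its SHA-512 digest;
# under the stated assumption this should match the hex string Ignition logs.
config = Path("/usr/lib/ignition/base.d/base.ign").read_bytes()
print(hashlib.sha512(config).hexdigest())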
Dec 13 14:24:57.471605 ignition[834]: INFO : Ignition 2.14.0 Dec 13 14:24:57.471605 ignition[834]: INFO : Stage: files Dec 13 14:24:57.486484 ignition[834]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:57.486484 ignition[834]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:24:57.486484 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:24:57.486484 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:24:57.539517 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:24:57.539517 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:24:57.539517 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:24:57.539517 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:24:57.539517 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:24:57.539517 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:24:57.539517 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:24:57.539517 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:24:57.539517 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:24:57.495444 unknown[834]: wrote ssh authorized keys file for user: core Dec 13 14:24:57.675522 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:24:57.780810 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:24:57.808509 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (834) Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts" Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1889184897" Dec 13 14:24:57.808550 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1889184897": device or resource busy Dec 13 14:24:57.808550 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1889184897", trying btrfs: device or resource busy Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1889184897" Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at 
"/mnt/oem1889184897" Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem1889184897" Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem1889184897" Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts" Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:24:57.808550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3300973853" Dec 13 14:24:58.051545 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3300973853": device or resource busy Dec 13 14:24:58.051545 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3300973853", trying btrfs: device or resource busy Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3300973853" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3300973853" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem3300973853" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem3300973853" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 
14:24:58.051545 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2691469439" Dec 13 14:24:58.294499 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2691469439": device or resource busy Dec 13 14:24:58.294499 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2691469439", trying btrfs: device or resource busy Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2691469439" Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2691469439" Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem2691469439" Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem2691469439" Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 14:24:58.294499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:24:58.072508 systemd-networkd[688]: eth0: Gained IPv6LL Dec 13 14:24:58.541568 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:24:58.541568 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Dec 13 14:24:58.689494 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:24:58.708508 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 14:24:58.708508 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:24:58.708508 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): 
[started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3537006630" Dec 13 14:24:58.708508 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3537006630": device or resource busy Dec 13 14:24:58.708508 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3537006630", trying btrfs: device or resource busy Dec 13 14:24:58.708508 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3537006630" Dec 13 14:24:58.708508 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3537006630" Dec 13 14:24:58.708508 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem3537006630" Dec 13 14:24:58.708508 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem3537006630" Dec 13 14:24:58.708508 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 14:24:58.708508 ignition[834]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:24:58.708508 ignition[834]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:24:58.708508 ignition[834]: INFO : files: op(1d): [started] processing unit "oem-gce.service" Dec 13 14:24:58.708508 ignition[834]: INFO : files: op(1d): [finished] processing unit "oem-gce.service" Dec 13 14:24:58.708508 ignition[834]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:24:58.708508 ignition[834]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:24:58.708508 ignition[834]: INFO : files: op(1f): [started] processing unit "containerd.service" Dec 13 14:24:59.184559 kernel: audit: type=1130 audit(1734099898.749:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.184625 kernel: audit: type=1130 audit(1734099898.860:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.184653 kernel: audit: type=1130 audit(1734099898.919:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.184675 kernel: audit: type=1131 audit(1734099898.919:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.184699 kernel: audit: type=1130 audit(1734099899.041:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.184714 kernel: audit: type=1131 audit(1734099899.063:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:59.184728 kernel: audit: type=1130 audit(1734099899.146:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(1f): op(20): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(1f): op(20): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(1f): [finished] processing unit "containerd.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(21): [started] processing unit "prepare-helm.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(21): op(22): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(21): op(22): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(21): [finished] processing unit "prepare-helm.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(25): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(25): 
[finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:24:59.185008 ignition[834]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:24:59.185008 ignition[834]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:24:59.185008 ignition[834]: INFO : files: files passed Dec 13 14:24:59.185008 ignition[834]: INFO : Ignition finished successfully Dec 13 14:24:59.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.713282 systemd[1]: mnt-oem3537006630.mount: Deactivated successfully. Dec 13 14:24:58.728018 systemd[1]: Finished ignition-files.service. Dec 13 14:24:59.558540 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:24:59.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.761131 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:24:58.787758 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:24:59.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.788930 systemd[1]: Starting ignition-quench.service... Dec 13 14:24:59.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.813119 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:24:59.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.861972 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:24:58.862143 systemd[1]: Finished ignition-quench.service. Dec 13 14:24:58.920901 systemd[1]: Reached target ignition-complete.target. 
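Annotation: the files stage above logs several CRITICAL/ERROR pairs while writing /etc/hosts, google-cloud-sdk.sh, oem-gce-enable-oslogin.service and oem-gce.service: each time, mounting /dev/disk/by-label/OEM as ext4 fails with "device or resource busy", Ignition retries the same mount as btrfs, and the retry succeeds, so the errors are recovered from rather than fatal. Ignition itself is written in Go; the Python sketch below only illustrates that try-ext4-then-btrfs fallback (the /mnt/oem target path is hypothetical, and running it requires root).

import subprocess

def mount_oem(device="/dev/disk/by-label/OEM", target="/mnt/oem"):
    """Try mounting as ext4, then fall back to btrfs, mirroring the log."""
    last = None
    for fstype in ("ext4", "btrfs"):
        last = subprocess.run(["mount", "-t", fstype, device, target],
                              capture_output=True, text=True)
        if last.returncode == 0:
            return fstype  # filesystem type that actually worked
    raise RuntimeError(f"could not mount {device}: {last.stderr.strip()}")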
Dec 13 14:24:59.691545 ignition[872]: INFO : Ignition 2.14.0 Dec 13 14:24:59.691545 ignition[872]: INFO : Stage: umount Dec 13 14:24:59.691545 ignition[872]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:59.691545 ignition[872]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:24:59.691545 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:24:59.691545 ignition[872]: INFO : umount: umount passed Dec 13 14:24:59.691545 ignition[872]: INFO : Ignition finished successfully Dec 13 14:24:59.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:58.985652 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:24:59.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.823634 iscsid[698]: iscsid shutting down. Dec 13 14:24:59.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.022953 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:24:59.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.023084 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:24:59.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.064410 systemd[1]: Reached target initrd-fs.target. Dec 13 14:24:59.094771 systemd[1]: Reached target initrd.target. Dec 13 14:24:59.111793 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:24:59.113153 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:24:59.129986 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:24:59.149256 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:24:59.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:24:59.200222 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:24:59.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.218872 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:24:59.238990 systemd[1]: Stopped target timers.target. Dec 13 14:24:59.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.273868 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:25:00.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:00.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.274066 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:24:59.291094 systemd[1]: Stopped target initrd.target. Dec 13 14:24:59.329835 systemd[1]: Stopped target basic.target. Dec 13 14:24:59.343890 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:24:59.363900 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:25:00.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.400820 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:25:00.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:00.101000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:24:59.434855 systemd[1]: Stopped target remote-fs.target. Dec 13 14:24:59.447966 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:24:59.465970 systemd[1]: Stopped target sysinit.target. Dec 13 14:25:00.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.486905 systemd[1]: Stopped target local-fs.target. Dec 13 14:25:00.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.520882 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:25:00.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.536873 systemd[1]: Stopped target swap.target. Dec 13 14:24:59.551767 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:25:00.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.551995 systemd[1]: Stopped dracut-pre-mount.service. 
Dec 13 14:24:59.566960 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:24:59.588804 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:25:00.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.589002 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:25:00.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.607097 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:25:00.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.607297 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:24:59.633910 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:25:00.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.634101 systemd[1]: Stopped ignition-files.service. Dec 13 14:25:00.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.650444 systemd[1]: Stopping ignition-mount.service... Dec 13 14:25:00.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.682873 systemd[1]: Stopping iscsid.service... Dec 13 14:25:00.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.699938 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:25:00.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:00.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:59.712742 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:24:59.713180 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:25:00.410000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:25:00.410000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:25:00.410000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:25:00.412000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:25:00.412000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:24:59.720968 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:25:00.438415 systemd-journald[190]: Failed to send stream file descriptor to service manager: Connection refused Dec 13 14:24:59.721158 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:24:59.742958 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 13 14:25:00.460585 systemd-journald[190]: Received SIGTERM from PID 1 (n/a). Dec 13 14:24:59.744024 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:24:59.744275 systemd[1]: Stopped iscsid.service. Dec 13 14:24:59.764539 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:24:59.764651 systemd[1]: Stopped ignition-mount.service. Dec 13 14:24:59.780317 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:24:59.780450 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:24:59.794399 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:24:59.794550 systemd[1]: Stopped ignition-disks.service. Dec 13 14:24:59.815660 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:24:59.815753 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:24:59.831691 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:24:59.831787 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:24:59.846633 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:24:59.846721 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:24:59.862607 systemd[1]: Stopped target paths.target. Dec 13 14:24:59.876507 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:24:59.880520 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:24:59.892534 systemd[1]: Stopped target slices.target. Dec 13 14:24:59.905548 systemd[1]: Stopped target sockets.target. Dec 13 14:24:59.922589 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:24:59.922671 systemd[1]: Closed iscsid.socket. Dec 13 14:24:59.938595 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:24:59.938692 systemd[1]: Stopped ignition-setup.service. Dec 13 14:24:59.954653 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:24:59.954746 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:24:59.970472 systemd[1]: Stopping iscsiuio.service... Dec 13 14:24:59.984114 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:24:59.984236 systemd[1]: Stopped iscsiuio.service. Dec 13 14:24:59.992082 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:24:59.992194 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:25:00.013686 systemd[1]: Stopped target network.target. Dec 13 14:25:00.028602 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:25:00.028686 systemd[1]: Closed iscsiuio.socket. Dec 13 14:25:00.042859 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:25:00.045575 systemd-networkd[688]: eth0: DHCPv6 lease lost Dec 13 14:25:00.049894 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:25:00.070922 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:25:00.071054 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:25:00.087291 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:25:00.087454 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:25:00.103268 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:25:00.103313 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:25:00.120584 systemd[1]: Stopping network-cleanup.service... Dec 13 14:25:00.126690 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:25:00.126779 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:25:00.139897 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 13 14:25:00.139972 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:25:00.161868 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:25:00.161940 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:25:00.176868 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:25:00.192336 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:25:00.193076 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:25:00.193231 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:25:00.199183 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:25:00.199270 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:25:00.220650 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:25:00.220719 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:25:00.235707 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:25:00.235790 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:25:00.251801 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:25:00.251883 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:25:00.268756 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:25:00.268830 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:25:00.284750 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:25:00.301518 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:25:00.301644 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:25:00.317809 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:25:00.317876 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:25:00.333642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:25:00.333724 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:25:00.349961 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:25:00.350668 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:25:00.350785 systemd[1]: Stopped network-cleanup.service. Dec 13 14:25:00.363927 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:25:00.364064 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:25:00.378856 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:25:00.395644 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:25:00.407039 systemd[1]: Switching root. Dec 13 14:25:00.464076 systemd-journald[190]: Journal stopped Dec 13 14:25:05.158941 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:25:05.159074 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:25:05.159107 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:25:05.159135 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:25:05.159159 kernel: SELinux: policy capability open_perms=1 Dec 13 14:25:05.159187 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:25:05.159217 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:25:05.159238 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:25:05.159268 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:25:05.159290 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:25:05.159313 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:25:05.159345 systemd[1]: Successfully loaded SELinux policy in 110.067ms. Dec 13 14:25:05.159436 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.660ms. Dec 13 14:25:05.159464 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:25:05.159494 systemd[1]: Detected virtualization kvm. Dec 13 14:25:05.159518 systemd[1]: Detected architecture x86-64. Dec 13 14:25:05.159550 systemd[1]: Detected first boot. Dec 13 14:25:05.159573 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:25:05.159597 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:25:05.159619 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:25:05.159643 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:25:05.159672 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:25:05.159698 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:25:05.159729 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:25:05.159753 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:25:05.159775 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:25:05.159799 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:25:05.159828 systemd[1]: Created slice system-getty.slice. Dec 13 14:25:05.159852 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:25:05.159876 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:25:05.159903 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:25:05.159932 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:25:05.159956 systemd[1]: Created slice user.slice. Dec 13 14:25:05.159980 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:25:05.160003 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:25:05.160028 systemd[1]: Set up automount boot.automount. Dec 13 14:25:05.160052 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:25:05.160076 systemd[1]: Reached target integritysetup.target. 
Dec 13 14:25:05.160100 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:25:05.160131 systemd[1]: Reached target remote-fs.target. Dec 13 14:25:05.160156 systemd[1]: Reached target slices.target. Dec 13 14:25:05.160180 systemd[1]: Reached target swap.target. Dec 13 14:25:05.160204 systemd[1]: Reached target torcx.target. Dec 13 14:25:05.160231 systemd[1]: Reached target veritysetup.target. Dec 13 14:25:05.160255 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:25:05.160279 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:25:05.160302 kernel: kauditd_printk_skb: 52 callbacks suppressed Dec 13 14:25:05.160330 kernel: audit: type=1400 audit(1734099904.674:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:25:05.160382 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:25:05.160408 kernel: audit: type=1335 audit(1734099904.674:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:25:05.160432 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:25:05.160453 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:25:05.160475 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:25:05.160498 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:25:05.160519 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:25:05.160558 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:25:05.160580 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:25:05.160602 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:25:05.160625 systemd[1]: Mounting media.mount... Dec 13 14:25:05.160653 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:05.160677 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:25:05.160699 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:25:05.160722 systemd[1]: Mounting tmp.mount... Dec 13 14:25:05.160744 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:25:05.160773 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:05.160796 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:25:05.160818 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:25:05.160842 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:05.160865 systemd[1]: Starting modprobe@drm.service... Dec 13 14:25:05.160887 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:05.160909 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:25:05.160932 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:05.160953 kernel: fuse: init (API version 7.34) Dec 13 14:25:05.160982 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:25:05.161005 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:25:05.161027 kernel: loop: module loaded Dec 13 14:25:05.161050 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:25:05.161071 systemd[1]: Starting systemd-journald.service... Dec 13 14:25:05.161094 systemd[1]: Starting systemd-modules-load.service... 
Dec 13 14:25:05.161117 kernel: audit: type=1305 audit(1734099905.141:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:25:05.161149 systemd-journald[1034]: Journal started Dec 13 14:25:05.161249 systemd-journald[1034]: Runtime Journal (/run/log/journal/9f3cf6a40ecfbe6d1f7fee88e0caa704) is 8.0M, max 148.8M, 140.8M free. Dec 13 14:25:04.674000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:25:04.674000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:25:05.141000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:25:05.141000 audit[1034]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffcef2c2510 a2=4000 a3=7ffcef2c25ac items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:05.202713 kernel: audit: type=1300 audit(1734099905.141:91): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffcef2c2510 a2=4000 a3=7ffcef2c25ac items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:05.202805 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:25:05.202854 kernel: audit: type=1327 audit(1734099905.141:91): proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:25:05.141000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:25:05.229407 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:25:05.245401 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:25:05.265395 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:05.275392 systemd[1]: Started systemd-journald.service. Dec 13 14:25:05.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.306892 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:25:05.308396 kernel: audit: type=1130 audit(1734099905.282:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.314710 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:25:05.323717 systemd[1]: Mounted media.mount. Dec 13 14:25:05.331684 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:25:05.340724 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:25:05.349743 systemd[1]: Mounted tmp.mount. Dec 13 14:25:05.356919 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:25:05.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:05.366135 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:25:05.388436 kernel: audit: type=1130 audit(1734099905.364:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.397072 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:25:05.397416 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:25:05.419424 kernel: audit: type=1130 audit(1734099905.395:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.428103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:05.428453 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:05.472173 kernel: audit: type=1130 audit(1734099905.426:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.472302 kernel: audit: type=1131 audit(1734099905.426:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.481003 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:25:05.481253 systemd[1]: Finished modprobe@drm.service. Dec 13 14:25:05.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.489988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:05.490233 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 14:25:05.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.498995 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:25:05.499241 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:25:05.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.507958 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:05.508428 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:05.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.517059 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:25:05.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.526016 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:25:05.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.534983 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:25:05.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.544025 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:25:05.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.553113 systemd[1]: Reached target network-pre.target. Dec 13 14:25:05.563069 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:25:05.573191 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:25:05.580528 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:25:05.584477 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:25:05.593490 systemd[1]: Starting systemd-journal-flush.service... 
Dec 13 14:25:05.601494 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:05.603583 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:25:05.607742 systemd-journald[1034]: Time spent on flushing to /var/log/journal/9f3cf6a40ecfbe6d1f7fee88e0caa704 is 64.366ms for 1100 entries. Dec 13 14:25:05.607742 systemd-journald[1034]: System Journal (/var/log/journal/9f3cf6a40ecfbe6d1f7fee88e0caa704) is 8.0M, max 584.8M, 576.8M free. Dec 13 14:25:05.715762 systemd-journald[1034]: Received client request to flush runtime journal. Dec 13 14:25:05.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.619592 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:25:05.621677 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:25:05.631529 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:25:05.640747 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:25:05.717965 udevadm[1055]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:25:05.651445 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:25:05.659733 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:25:05.669043 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:25:05.678109 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:25:05.691560 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:25:05.712131 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:25:05.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.721314 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:25:05.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:05.731909 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:25:05.789071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:25:05.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:06.338796 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:25:06.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:06.349471 systemd[1]: Starting systemd-udevd.service... Dec 13 14:25:06.374605 systemd-udevd[1066]: Using default interface naming scheme 'v252'. Dec 13 14:25:06.432524 systemd[1]: Started systemd-udevd.service. 
Dec 13 14:25:06.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:06.443496 systemd[1]: Starting systemd-networkd.service... Dec 13 14:25:06.464034 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:25:06.515297 systemd[1]: Found device dev-ttyS0.device. Dec 13 14:25:06.538263 systemd[1]: Started systemd-userdbd.service. Dec 13 14:25:06.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:06.643386 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:25:06.672398 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:25:06.689379 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 14:25:06.690684 systemd-networkd[1078]: lo: Link UP Dec 13 14:25:06.690699 systemd-networkd[1078]: lo: Gained carrier Dec 13 14:25:06.692013 systemd-networkd[1078]: Enumeration completed Dec 13 14:25:06.692221 systemd[1]: Started systemd-networkd.service. Dec 13 14:25:06.692757 systemd-networkd[1078]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:25:06.699509 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 14:25:06.700114 systemd-networkd[1078]: eth0: Link UP Dec 13 14:25:06.700126 systemd-networkd[1078]: eth0: Gained carrier Dec 13 14:25:06.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:06.709553 systemd-networkd[1078]: eth0: DHCPv4 address 10.128.0.74/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 14:25:06.708000 audit[1088]: AVC avc: denied { confidentiality } for pid=1088 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:25:06.708000 audit[1088]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5651261135e0 a1=337fc a2=7fbb1e7f0bc5 a3=5 items=110 ppid=1066 pid=1088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:06.708000 audit: CWD cwd="/" Dec 13 14:25:06.708000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=1 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=2 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=3 name=(null) inode=14581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=4 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=5 name=(null) inode=14582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=6 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=7 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=8 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=9 name=(null) inode=14584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=10 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=11 name=(null) inode=14585 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=12 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=13 name=(null) inode=14586 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=14 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=15 name=(null) inode=14587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=16 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=17 name=(null) inode=14588 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=18 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=19 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=20 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=21 name=(null) inode=14590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=22 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=23 name=(null) inode=14591 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=24 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=25 name=(null) inode=14592 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=26 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=27 name=(null) inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=28 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: 
PATH item=29 name=(null) inode=14594 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=30 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=31 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=32 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=33 name=(null) inode=14596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=34 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=35 name=(null) inode=14597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=36 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=37 name=(null) inode=14598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=38 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=39 name=(null) inode=14599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=40 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=41 name=(null) inode=14600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=42 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=43 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=44 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=45 name=(null) inode=14602 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=46 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=47 name=(null) inode=14603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=48 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=49 name=(null) inode=14604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=50 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=51 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=52 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=53 name=(null) inode=14606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=55 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=56 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=57 name=(null) inode=14608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=58 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=59 name=(null) inode=14609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=60 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=61 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=62 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=63 name=(null) inode=14611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=64 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=65 name=(null) inode=14612 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=66 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=67 name=(null) inode=14613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=68 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=69 name=(null) inode=14614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=70 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=71 name=(null) inode=14615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=72 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=73 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=74 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=75 name=(null) inode=14617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=76 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=77 name=(null) inode=14618 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH 
item=78 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=79 name=(null) inode=14619 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=80 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=81 name=(null) inode=14620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=82 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=83 name=(null) inode=14621 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=84 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=85 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=86 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=87 name=(null) inode=14623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=88 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=89 name=(null) inode=14624 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=90 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=91 name=(null) inode=14625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=92 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=93 name=(null) inode=14626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=94 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=95 name=(null) inode=14627 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=96 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=97 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=98 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=99 name=(null) inode=14629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=100 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=101 name=(null) inode=14630 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=102 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=103 name=(null) inode=14631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=104 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=105 name=(null) inode=14632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=106 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=107 name=(null) inode=14633 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PATH item=109 name=(null) inode=13817 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:06.708000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:25:06.806967 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 
scanned by (udev-worker) (1080) Dec 13 14:25:06.853510 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 14:25:06.858708 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 14:25:06.858738 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:25:06.883390 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:25:06.901166 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 14:25:06.909146 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:25:06.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:06.920490 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:25:06.950346 lvm[1104]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:25:06.980041 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:25:06.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:06.988858 systemd[1]: Reached target cryptsetup.target. Dec 13 14:25:06.999119 systemd[1]: Starting lvm2-activation.service... Dec 13 14:25:07.005773 lvm[1106]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:25:07.034040 systemd[1]: Finished lvm2-activation.service. Dec 13 14:25:07.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.042913 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:25:07.051569 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:25:07.051621 systemd[1]: Reached target local-fs.target. Dec 13 14:25:07.060547 systemd[1]: Reached target machines.target. Dec 13 14:25:07.070251 systemd[1]: Starting ldconfig.service... Dec 13 14:25:07.078561 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:07.078668 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:07.080404 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:25:07.089317 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:25:07.101473 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:25:07.104395 systemd[1]: Starting systemd-sysext.service... Dec 13 14:25:07.105165 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1109 (bootctl) Dec 13 14:25:07.108491 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:25:07.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:07.128912 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:25:07.137650 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:25:07.146934 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:25:07.147292 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:25:07.175396 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:25:07.279565 systemd-fsck[1122]: fsck.fat 4.2 (2021-01-31) Dec 13 14:25:07.279565 systemd-fsck[1122]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:25:07.283573 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:25:07.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.295854 systemd[1]: Mounting boot.mount... Dec 13 14:25:07.313082 systemd[1]: Mounted boot.mount. Dec 13 14:25:07.337671 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:25:07.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.513834 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:25:07.558599 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:25:07.562083 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:25:07.563995 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:25:07.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.592096 (sd-sysext)[1132]: Using extensions 'kubernetes'. Dec 13 14:25:07.592896 (sd-sysext)[1132]: Merged extensions into '/usr'. Dec 13 14:25:07.623669 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:07.626227 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:25:07.633933 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:07.636456 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:07.646223 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:07.655861 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:07.663635 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:07.663894 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:07.664119 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:07.669677 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:25:07.677050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:07.677348 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:07.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:07.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.687694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:07.687971 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:07.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.697658 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:07.697971 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:07.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.707320 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:07.707550 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:25:07.709249 systemd[1]: Finished systemd-sysext.service. Dec 13 14:25:07.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:07.720491 systemd[1]: Starting ensure-sysext.service... Dec 13 14:25:07.729244 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:25:07.739284 systemd[1]: Reloading. Dec 13 14:25:07.751131 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:25:07.753649 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:25:07.758555 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:25:07.840791 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2024-12-13T14:25:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:25:07.846536 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2024-12-13T14:25:07Z" level=info msg="torcx already run" Dec 13 14:25:07.848856 ldconfig[1108]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:25:08.058381 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 14:25:08.058412 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:25:08.085509 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:25:08.165551 systemd[1]: Finished ldconfig.service. Dec 13 14:25:08.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:08.174276 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:25:08.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:08.187841 systemd[1]: Starting audit-rules.service... Dec 13 14:25:08.196442 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:25:08.207084 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:25:08.218033 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:25:08.229888 systemd[1]: Starting systemd-resolved.service... Dec 13 14:25:08.241042 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:25:08.250746 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:25:08.259935 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:25:08.265000 audit[1246]: SYSTEM_BOOT pid=1246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:25:08.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:08.269457 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:25:08.269848 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:25:08.273000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:25:08.273000 audit[1251]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0ba88c50 a2=420 a3=0 items=0 ppid=1219 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:08.273000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:25:08.274864 augenrules[1251]: No rules Dec 13 14:25:08.279400 systemd[1]: Finished audit-rules.service. Dec 13 14:25:08.290829 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:25:08.307673 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:25:08.312523 systemd-networkd[1078]: eth0: Gained IPv6LL Dec 13 14:25:08.319994 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:08.320563 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 14:25:08.322963 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:08.332729 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:08.341669 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:08.350896 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:25:08.359585 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:08.359868 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:08.362674 systemd[1]: Starting systemd-update-done.service... Dec 13 14:25:08.363972 enable-oslogin[1265]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 14:25:08.369505 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:25:08.369746 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:08.372164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:08.372455 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:08.381325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:08.381619 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:08.391298 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:08.391655 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:08.401290 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:25:08.401691 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:25:08.411457 systemd[1]: Finished systemd-update-done.service. Dec 13 14:25:08.420202 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:08.420418 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:25:08.426127 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:08.427826 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:08.430317 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:08.439827 systemd[1]: Starting modprobe@drm.service... Dec 13 14:25:08.448858 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:08.457750 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:08.466874 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:25:08.475637 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:08.475894 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:08.478442 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:25:08.484384 enable-oslogin[1278]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 14:25:08.486579 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 14:25:08.486818 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:08.489543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:08.489817 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:08.499334 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:25:08.499629 systemd[1]: Finished modprobe@drm.service. Dec 13 14:25:08.509306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:08.509605 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:08.518218 systemd-resolved[1235]: Positive Trust Anchors: Dec 13 14:25:08.519249 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:08.519535 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:08.520433 systemd-resolved[1235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:25:08.520580 systemd-resolved[1235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:25:08.526684 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:25:08.527069 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:25:08.537259 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:25:08.542260 systemd-timesyncd[1241]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 14:25:08.542335 systemd-timesyncd[1241]: Initial clock synchronization to Fri 2024-12-13 14:25:08.335813 UTC. Dec 13 14:25:08.548165 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:25:08.558134 systemd[1]: Reached target time-set.target. Dec 13 14:25:08.559812 systemd-resolved[1235]: Defaulting to hostname 'linux'. Dec 13 14:25:08.566673 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:08.566874 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:25:08.567550 systemd[1]: Started systemd-resolved.service. Dec 13 14:25:08.576300 systemd[1]: Reached target network.target. Dec 13 14:25:08.585593 systemd[1]: Reached target network-online.target. Dec 13 14:25:08.594553 systemd[1]: Reached target nss-lookup.target. Dec 13 14:25:08.603538 systemd[1]: Reached target sysinit.target. Dec 13 14:25:08.611678 systemd[1]: Started motdgen.path. Dec 13 14:25:08.618627 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:25:08.628773 systemd[1]: Started logrotate.timer. Dec 13 14:25:08.635695 systemd[1]: Started mdadm.timer. Dec 13 14:25:08.642580 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:25:08.651565 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:25:08.651617 systemd[1]: Reached target paths.target. Dec 13 14:25:08.658545 systemd[1]: Reached target timers.target. Dec 13 14:25:08.665997 systemd[1]: Listening on dbus.socket. Dec 13 14:25:08.675331 systemd[1]: Starting docker.socket... 
Dec 13 14:25:08.685121 systemd[1]: Listening on sshd.socket. Dec 13 14:25:08.692688 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:08.693738 systemd[1]: Finished ensure-sysext.service. Dec 13 14:25:08.702817 systemd[1]: Listening on docker.socket. Dec 13 14:25:08.710684 systemd[1]: Reached target sockets.target. Dec 13 14:25:08.719522 systemd[1]: Reached target basic.target. Dec 13 14:25:08.726791 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:25:08.726887 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:25:08.726931 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:25:08.728682 systemd[1]: Starting containerd.service... Dec 13 14:25:08.737263 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:25:08.747477 systemd[1]: Starting dbus.service... Dec 13 14:25:08.755192 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:25:08.764648 systemd[1]: Starting extend-filesystems.service... Dec 13 14:25:08.768229 jq[1292]: false Dec 13 14:25:08.771525 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:25:08.774686 systemd[1]: Starting kubelet.service... Dec 13 14:25:08.784035 systemd[1]: Starting motdgen.service... Dec 13 14:25:08.793799 systemd[1]: Starting oem-gce.service... Dec 13 14:25:08.805836 systemd[1]: Starting prepare-helm.service... Dec 13 14:25:08.816767 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:25:08.825826 systemd[1]: Starting sshd-keygen.service... Dec 13 14:25:08.842586 systemd[1]: Starting systemd-logind.service... Dec 13 14:25:08.850535 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:08.850662 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 14:25:08.853003 systemd[1]: Starting update-engine.service... Dec 13 14:25:08.861183 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:25:08.873332 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:25:08.874496 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:25:08.876552 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:25:08.877908 jq[1320]: true Dec 13 14:25:08.885746 systemd[1]: Finished motdgen.service. Dec 13 14:25:08.896011 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:25:08.896694 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 14:25:08.919903 extend-filesystems[1293]: Found loop1 Dec 13 14:25:08.919903 extend-filesystems[1293]: Found sda Dec 13 14:25:08.919903 extend-filesystems[1293]: Found sda1 Dec 13 14:25:08.919903 extend-filesystems[1293]: Found sda2 Dec 13 14:25:08.919903 extend-filesystems[1293]: Found sda3 Dec 13 14:25:08.919903 extend-filesystems[1293]: Found usr Dec 13 14:25:08.919903 extend-filesystems[1293]: Found sda4 Dec 13 14:25:08.919903 extend-filesystems[1293]: Found sda6 Dec 13 14:25:08.919903 extend-filesystems[1293]: Found sda7 Dec 13 14:25:08.919903 extend-filesystems[1293]: Found sda9 Dec 13 14:25:08.919903 extend-filesystems[1293]: Checking size of /dev/sda9 Dec 13 14:25:09.007827 mkfs.ext4[1336]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 14:25:09.007827 mkfs.ext4[1336]: Discarding device blocks: done Dec 13 14:25:09.007827 mkfs.ext4[1336]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 14:25:09.007827 mkfs.ext4[1336]: Filesystem UUID: 54a31b87-a49e-464f-9634-77405b8bc5da Dec 13 14:25:09.007827 mkfs.ext4[1336]: Superblock backups stored on blocks: Dec 13 14:25:09.007827 mkfs.ext4[1336]: 32768, 98304, 163840, 229376 Dec 13 14:25:09.007827 mkfs.ext4[1336]: Allocating group tables: done Dec 13 14:25:09.007827 mkfs.ext4[1336]: Writing inode tables: done Dec 13 14:25:09.007827 mkfs.ext4[1336]: Creating journal (8192 blocks): done Dec 13 14:25:09.007827 mkfs.ext4[1336]: Writing superblocks and filesystem accounting information: done Dec 13 14:25:09.016649 umount[1350]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 14:25:09.034187 extend-filesystems[1293]: Resized partition /dev/sda9 Dec 13 14:25:09.051531 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 14:25:09.053840 jq[1330]: true Dec 13 14:25:09.078571 extend-filesystems[1353]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:25:09.092855 update_engine[1319]: I1213 14:25:09.092788 1319 main.cc:92] Flatcar Update Engine starting Dec 13 14:25:09.101397 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 14:25:09.109656 dbus-daemon[1291]: [system] SELinux support is enabled Dec 13 14:25:09.109980 systemd[1]: Started dbus.service. Dec 13 14:25:09.118383 update_engine[1319]: I1213 14:25:09.118207 1319 update_check_scheduler.cc:74] Next update check in 9m59s Dec 13 14:25:09.121437 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:25:09.122816 dbus-daemon[1291]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1078 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:25:09.123212 tar[1329]: linux-amd64/helm Dec 13 14:25:09.130175 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:25:09.130226 systemd[1]: Reached target system-config.target. Dec 13 14:25:09.139641 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:25:09.139683 systemd[1]: Reached target user-config.target. Dec 13 14:25:09.149628 systemd[1]: Started update-engine.service. Dec 13 14:25:09.149916 dbus-daemon[1291]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:25:09.161430 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 14:25:09.178301 extend-filesystems[1353]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 14:25:09.178301 extend-filesystems[1353]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 14:25:09.178301 extend-filesystems[1353]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 14:25:09.229780 env[1331]: time="2024-12-13T14:25:09.209366468Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:25:09.230213 extend-filesystems[1293]: Resized filesystem in /dev/sda9 Dec 13 14:25:09.179626 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:25:09.180004 systemd[1]: Finished extend-filesystems.service. Dec 13 14:25:09.202037 systemd[1]: Started locksmithd.service. Dec 13 14:25:09.218807 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:25:09.380528 bash[1381]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:25:09.381601 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:25:09.382273 coreos-metadata[1290]: Dec 13 14:25:09.382 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 14:25:09.400316 coreos-metadata[1290]: Dec 13 14:25:09.400 INFO Fetch failed with 404: resource not found Dec 13 14:25:09.400316 coreos-metadata[1290]: Dec 13 14:25:09.400 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 14:25:09.401578 coreos-metadata[1290]: Dec 13 14:25:09.401 INFO Fetch successful Dec 13 14:25:09.401829 coreos-metadata[1290]: Dec 13 14:25:09.401 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 14:25:09.404431 coreos-metadata[1290]: Dec 13 14:25:09.402 INFO Fetch failed with 404: resource not found Dec 13 14:25:09.404431 coreos-metadata[1290]: Dec 13 14:25:09.402 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 14:25:09.404431 coreos-metadata[1290]: Dec 13 14:25:09.404 INFO Fetch failed with 404: resource not found Dec 13 14:25:09.404431 coreos-metadata[1290]: Dec 13 14:25:09.404 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 14:25:09.405314 coreos-metadata[1290]: Dec 13 14:25:09.405 INFO Fetch successful Dec 13 14:25:09.407597 unknown[1290]: wrote ssh authorized keys file for user: core Dec 13 14:25:09.435423 update-ssh-keys[1385]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:25:09.436650 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:25:09.477677 systemd-logind[1316]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:25:09.478769 systemd-logind[1316]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 14:25:09.478947 systemd-logind[1316]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:25:09.485504 systemd-logind[1316]: New seat seat0. Dec 13 14:25:09.492726 env[1331]: time="2024-12-13T14:25:09.479205819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 14:25:09.493553 env[1331]: time="2024-12-13T14:25:09.493516952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:09.497510 env[1331]: time="2024-12-13T14:25:09.496516326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:25:09.497510 env[1331]: time="2024-12-13T14:25:09.496560541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:09.497510 env[1331]: time="2024-12-13T14:25:09.496942540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:25:09.497510 env[1331]: time="2024-12-13T14:25:09.496974957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:09.497510 env[1331]: time="2024-12-13T14:25:09.496996940Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:25:09.497510 env[1331]: time="2024-12-13T14:25:09.497014083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:09.497510 env[1331]: time="2024-12-13T14:25:09.497133799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:09.497510 env[1331]: time="2024-12-13T14:25:09.497474930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:09.498418 systemd[1]: Started systemd-logind.service. Dec 13 14:25:09.498883 env[1331]: time="2024-12-13T14:25:09.498826150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:25:09.498883 env[1331]: time="2024-12-13T14:25:09.498858929Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:25:09.499038 env[1331]: time="2024-12-13T14:25:09.498947006Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:25:09.499038 env[1331]: time="2024-12-13T14:25:09.498967560Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509486044Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509542425Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509567363Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509631237Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509656360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509746942Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509771879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509794413Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509815770Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509840601Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509862167Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.509884516Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.510036617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:25:09.512374 env[1331]: time="2024-12-13T14:25:09.510137989Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510718810Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510759968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510783145Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510846182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510868301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510889076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510913882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510935011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510953933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510972543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.510989144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.511030339Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.511204719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.511229349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513060 env[1331]: time="2024-12-13T14:25:09.511250808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513898 env[1331]: time="2024-12-13T14:25:09.511269897Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:25:09.513898 env[1331]: time="2024-12-13T14:25:09.511295428Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:25:09.513898 env[1331]: time="2024-12-13T14:25:09.511313565Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:25:09.513898 env[1331]: time="2024-12-13T14:25:09.511366569Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:25:09.513898 env[1331]: time="2024-12-13T14:25:09.511418358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:25:09.513465 systemd[1]: Started containerd.service. 
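[Editor's note] The coreos-metadata fetch sequence logged earlier walks a fixed list of GCE metadata attributes (instance-level sshKeys/ssh-keys first, then the project-level ones), treating a 404 as "attribute not set". A minimal sketch of that fetch order, assuming only that the GCE metadata server requires the Metadata-Flavor: Google header; the attribute paths are taken verbatim from the log:

```python
import urllib.error
import urllib.request

BASE = "http://169.254.169.254/computeMetadata/v1"
# Same attribute paths, in the same order, as the coreos-metadata log above.
PATHS = [
    "/instance/attributes/sshKeys",
    "/instance/attributes/ssh-keys",
    "/instance/attributes/block-project-ssh-keys",
    "/project/attributes/sshKeys",
    "/project/attributes/ssh-keys",
]

def fetch_ssh_key_attributes() -> list[str]:
    """Return the bodies of all attributes that exist; skip 404s like the log does."""
    found = []
    for path in PATHS:
        req = urllib.request.Request(
            BASE + path, headers={"Metadata-Flavor": "Google"}
        )
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                found.append(resp.read().decode())
        except urllib.error.HTTPError as err:
            if err.code != 404:  # 404 == "resource not found" in the log
                raise
    return found
```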
Dec 13 14:25:09.514237 env[1331]: time="2024-12-13T14:25:09.511725723Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:25:09.514237 env[1331]: time="2024-12-13T14:25:09.511814778Z" level=info msg="Connect containerd service" Dec 13 14:25:09.514237 env[1331]: time="2024-12-13T14:25:09.511870127Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:25:09.514237 env[1331]: time="2024-12-13T14:25:09.512712346Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:25:09.514237 env[1331]: time="2024-12-13T14:25:09.513101718Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:25:09.514237 env[1331]: time="2024-12-13T14:25:09.513193818Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 14:25:09.514237 env[1331]: time="2024-12-13T14:25:09.513604133Z" level=info msg="containerd successfully booted in 0.339727s" Dec 13 14:25:09.534134 env[1331]: time="2024-12-13T14:25:09.533537228Z" level=info msg="Start subscribing containerd event" Dec 13 14:25:09.534134 env[1331]: time="2024-12-13T14:25:09.533652525Z" level=info msg="Start recovering state" Dec 13 14:25:09.534134 env[1331]: time="2024-12-13T14:25:09.533765228Z" level=info msg="Start event monitor" Dec 13 14:25:09.534134 env[1331]: time="2024-12-13T14:25:09.533800272Z" level=info msg="Start snapshots syncer" Dec 13 14:25:09.534134 env[1331]: time="2024-12-13T14:25:09.533823092Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:25:09.534134 env[1331]: time="2024-12-13T14:25:09.533836778Z" level=info msg="Start streaming server" Dec 13 14:25:09.837536 dbus-daemon[1291]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:25:09.837734 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:25:09.838400 dbus-daemon[1291]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1374 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:25:09.853372 systemd[1]: Starting polkit.service... Dec 13 14:25:09.926527 polkitd[1398]: Started polkitd version 121 Dec 13 14:25:09.955196 polkitd[1398]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:25:09.957976 polkitd[1398]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:25:09.966130 polkitd[1398]: Finished loading, compiling and executing 2 rules Dec 13 14:25:09.966888 dbus-daemon[1291]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:25:09.967139 systemd[1]: Started polkit.service. Dec 13 14:25:09.967380 polkitd[1398]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:25:09.994260 systemd-hostnamed[1374]: Hostname set to (transient) Dec 13 14:25:09.997310 systemd-resolved[1235]: System hostname changed to 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal'. Dec 13 14:25:10.993063 tar[1329]: linux-amd64/LICENSE Dec 13 14:25:10.993746 tar[1329]: linux-amd64/README.md Dec 13 14:25:11.010547 systemd[1]: Finished prepare-helm.service. Dec 13 14:25:11.079645 systemd[1]: Started kubelet.service. Dec 13 14:25:11.839379 locksmithd[1367]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:25:12.688313 kubelet[1412]: E1213 14:25:12.688233 1412 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:25:12.697661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:25:12.697978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:25:15.226288 sshd_keygen[1332]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:25:15.275644 systemd[1]: Finished sshd-keygen.service. Dec 13 14:25:15.287065 systemd[1]: Starting issuegen.service... Dec 13 14:25:15.298653 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:25:15.299030 systemd[1]: Finished issuegen.service. 
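[Editor's note] The kubelet.service failure above (it repeats later in the log) comes down to a single missing file: /var/lib/kubelet/config.yaml is read before anything else, and the unit exits when it is absent. A hedged sketch, not Flatcar's actual provisioning step, of the precondition a node bootstrapper would normally satisfy before starting the unit:

```python
from pathlib import Path

CONFIG_PATH = Path("/var/lib/kubelet/config.yaml")

# Only the standard KubeletConfiguration header is shown; every real field
# would come from the cluster's own bootstrap data, so none are invented here.
MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

def ensure_kubelet_config() -> bool:
    """Return True if the config already exists, else write a stub and return False."""
    if CONFIG_PATH.exists():
        return True
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    CONFIG_PATH.write_text(MINIMAL_CONFIG)
    return False
```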
Dec 13 14:25:15.310241 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:25:15.326372 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:25:15.337255 systemd[1]: Started getty@tty1.service. Dec 13 14:25:15.347277 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:25:15.355906 systemd[1]: Reached target getty.target. Dec 13 14:25:15.719160 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Dec 13 14:25:17.324673 systemd[1]: Created slice system-sshd.slice. Dec 13 14:25:17.335661 systemd[1]: Started sshd@0-10.128.0.74:22-139.178.68.195:45532.service. Dec 13 14:25:17.654804 sshd[1443]: Accepted publickey for core from 139.178.68.195 port 45532 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:25:17.659730 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:17.683051 systemd[1]: Created slice user-500.slice. Dec 13 14:25:17.692403 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:25:17.706834 systemd-logind[1316]: New session 1 of user core. Dec 13 14:25:17.716610 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:25:17.728401 systemd[1]: Starting user@500.service... Dec 13 14:25:17.750614 (systemd)[1448]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:17.873428 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 14:25:17.896578 systemd-nspawn[1455]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Dec 13 14:25:17.896578 systemd-nspawn[1455]: Press ^] three times within 1s to kill container. Dec 13 14:25:17.896969 systemd[1448]: Queued start job for default target default.target. Dec 13 14:25:17.897890 systemd[1448]: Reached target paths.target. Dec 13 14:25:17.897920 systemd[1448]: Reached target sockets.target. Dec 13 14:25:17.897942 systemd[1448]: Reached target timers.target. Dec 13 14:25:17.897962 systemd[1448]: Reached target basic.target. Dec 13 14:25:17.898036 systemd[1448]: Reached target default.target. Dec 13 14:25:17.898095 systemd[1448]: Startup finished in 136ms. Dec 13 14:25:17.898277 systemd[1]: Started user@500.service. Dec 13 14:25:17.907152 systemd[1]: Started session-1.scope. Dec 13 14:25:17.916384 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:25:17.995585 systemd[1]: Started oem-gce.service. Dec 13 14:25:18.002985 systemd[1]: Reached target multi-user.target. Dec 13 14:25:18.013962 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:25:18.027285 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:25:18.027613 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:25:18.037697 systemd[1]: Startup finished in 9.010s (kernel) + 17.302s (userspace) = 26.313s. Dec 13 14:25:18.057336 systemd-nspawn[1455]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 14:25:18.057336 systemd-nspawn[1455]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 14:25:18.057336 systemd-nspawn[1455]: + /usr/bin/google_instance_setup Dec 13 14:25:18.142053 systemd[1]: Started sshd@1-10.128.0.74:22-139.178.68.195:45536.service. 
Dec 13 14:25:18.438778 sshd[1466]: Accepted publickey for core from 139.178.68.195 port 45536 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:25:18.439798 sshd[1466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:18.446430 systemd-logind[1316]: New session 2 of user core. Dec 13 14:25:18.447785 systemd[1]: Started session-2.scope. Dec 13 14:25:18.652651 sshd[1466]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:18.656743 systemd[1]: sshd@1-10.128.0.74:22-139.178.68.195:45536.service: Deactivated successfully. Dec 13 14:25:18.657884 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:25:18.659244 systemd-logind[1316]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:25:18.660946 systemd-logind[1316]: Removed session 2. Dec 13 14:25:18.694641 systemd[1]: Started sshd@2-10.128.0.74:22-139.178.68.195:45546.service. Dec 13 14:25:18.732382 instance-setup[1465]: INFO Running google_set_multiqueue. Dec 13 14:25:18.748828 instance-setup[1465]: INFO Set channels for eth0 to 2. Dec 13 14:25:18.752925 instance-setup[1465]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 14:25:18.754381 instance-setup[1465]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 14:25:18.754828 instance-setup[1465]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 14:25:18.756383 instance-setup[1465]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 14:25:18.756735 instance-setup[1465]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 14:25:18.758170 instance-setup[1465]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 14:25:18.758647 instance-setup[1465]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 13 14:25:18.760144 instance-setup[1465]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 14:25:18.772009 instance-setup[1465]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 14:25:18.772418 instance-setup[1465]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 14:25:18.817939 systemd-nspawn[1455]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 14:25:18.999270 sshd[1475]: Accepted publickey for core from 139.178.68.195 port 45546 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:25:19.000896 sshd[1475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:19.010059 systemd[1]: Started session-3.scope. Dec 13 14:25:19.011936 systemd-logind[1316]: New session 3 of user core. Dec 13 14:25:19.177012 startup-script[1505]: INFO Starting startup scripts. Dec 13 14:25:19.190292 startup-script[1505]: INFO No startup scripts found in metadata. Dec 13 14:25:19.190530 startup-script[1505]: INFO Finished running startup scripts. Dec 13 14:25:19.204692 sshd[1475]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:19.211423 systemd-logind[1316]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:25:19.213069 systemd[1]: sshd@2-10.128.0.74:22-139.178.68.195:45546.service: Deactivated successfully. Dec 13 14:25:19.214304 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:25:19.215510 systemd-logind[1316]: Removed session 3. 
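[Editor's note] The instance-setup output above shows what google_set_multiqueue does on this 2-vCPU guest: it pins the four virtionet queue interrupts to CPUs 0/1 via /proc/irq/<n>/smp_affinity_list and sets per-queue XPS masks on eth0. A sketch of those same writes, with the IRQ numbers and masks taken from the log (a real tool would discover them at runtime):

```python
from pathlib import Path

# Values exactly as logged by instance-setup above.
IRQ_AFFINITY = {31: "0", 32: "0", 33: "1", 34: "1"}
XPS_MASKS = {0: "1", 1: "2"}  # tx-0 -> CPU 0 (mask 0x1), tx-1 -> CPU 1 (mask 0x2)

def apply_multiqueue(iface: str = "eth0") -> None:
    """Pin virtionet queue IRQs and set XPS masks, mirroring the logged writes."""
    for irq, cpus in IRQ_AFFINITY.items():
        Path(f"/proc/irq/{irq}/smp_affinity_list").write_text(cpus)
    for queue, mask in XPS_MASKS.items():
        Path(f"/sys/class/net/{iface}/queues/tx-{queue}/xps_cpus").write_text(mask)
```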
Dec 13 14:25:19.236494 systemd-nspawn[1455]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 14:25:19.236494 systemd-nspawn[1455]: + daemon_pids=() Dec 13 14:25:19.237211 systemd-nspawn[1455]: + for d in accounts clock_skew network Dec 13 14:25:19.237211 systemd-nspawn[1455]: + daemon_pids+=($!) Dec 13 14:25:19.237211 systemd-nspawn[1455]: + for d in accounts clock_skew network Dec 13 14:25:19.237392 systemd-nspawn[1455]: + daemon_pids+=($!) Dec 13 14:25:19.237392 systemd-nspawn[1455]: + for d in accounts clock_skew network Dec 13 14:25:19.237715 systemd-nspawn[1455]: + daemon_pids+=($!) Dec 13 14:25:19.237846 systemd-nspawn[1455]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 14:25:19.237929 systemd-nspawn[1455]: + /usr/bin/systemd-notify --ready Dec 13 14:25:19.238050 systemd-nspawn[1455]: + /usr/bin/google_accounts_daemon Dec 13 14:25:19.238584 systemd-nspawn[1455]: + /usr/bin/google_network_daemon Dec 13 14:25:19.238947 systemd-nspawn[1455]: + /usr/bin/google_clock_skew_daemon Dec 13 14:25:19.248718 systemd[1]: Started sshd@3-10.128.0.74:22-139.178.68.195:45560.service. Dec 13 14:25:19.329788 systemd-nspawn[1455]: + wait -n 36 37 38 Dec 13 14:25:19.562448 sshd[1517]: Accepted publickey for core from 139.178.68.195 port 45560 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:25:19.563683 sshd[1517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:19.571140 systemd-logind[1316]: New session 4 of user core. Dec 13 14:25:19.571923 systemd[1]: Started session-4.scope. Dec 13 14:25:19.778701 sshd[1517]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:19.787520 systemd[1]: sshd@3-10.128.0.74:22-139.178.68.195:45560.service: Deactivated successfully. Dec 13 14:25:19.789087 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:25:19.789090 systemd-logind[1316]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:25:19.791026 systemd-logind[1316]: Removed session 4. Dec 13 14:25:19.823891 systemd[1]: Started sshd@4-10.128.0.74:22-139.178.68.195:45568.service. Dec 13 14:25:19.857606 google-clock-skew[1514]: INFO Starting Google Clock Skew daemon. Dec 13 14:25:19.886069 google-clock-skew[1514]: INFO Clock drift token has changed: 0. Dec 13 14:25:19.897001 systemd-nspawn[1455]: hwclock: Cannot access the Hardware Clock via any known method. Dec 13 14:25:19.897306 systemd-nspawn[1455]: hwclock: Use the --verbose option to see the details of our search for an access method. Dec 13 14:25:19.898371 google-clock-skew[1514]: WARNING Failed to sync system time with hardware clock. Dec 13 14:25:20.036068 google-networking[1515]: INFO Starting Google Networking daemon. Dec 13 14:25:20.106302 groupadd[1534]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 14:25:20.110395 groupadd[1534]: group added to /etc/gshadow: name=google-sudoers Dec 13 14:25:20.114571 groupadd[1534]: new group: name=google-sudoers, GID=1000 Dec 13 14:25:20.131073 sshd[1526]: Accepted publickey for core from 139.178.68.195 port 45568 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:25:20.132337 google-accounts[1513]: INFO Starting Google Accounts daemon. Dec 13 14:25:20.133499 sshd[1526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:20.142374 systemd[1]: Started session-5.scope. Dec 13 14:25:20.144397 systemd-logind[1316]: New session 5 of user core. Dec 13 14:25:20.170433 google-accounts[1513]: WARNING OS Login not installed. 
Dec 13 14:25:20.171702 google-accounts[1513]: INFO Creating a new user account for 0. Dec 13 14:25:20.177729 systemd-nspawn[1455]: useradd: invalid user name '0': use --badname to ignore Dec 13 14:25:20.178515 google-accounts[1513]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 14:25:20.329912 sudo[1546]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 14:25:20.330341 sudo[1546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:25:20.339645 dbus-daemon[1291]: \xd0\xed\xa1\xb7kU: received setenforce notice (enforcing=933591744) Dec 13 14:25:20.341840 sudo[1546]: pam_unix(sudo:session): session closed for user root Dec 13 14:25:20.386158 sshd[1526]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:20.391730 systemd[1]: sshd@4-10.128.0.74:22-139.178.68.195:45568.service: Deactivated successfully. Dec 13 14:25:20.393034 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:25:20.394757 systemd-logind[1316]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:25:20.396673 systemd-logind[1316]: Removed session 5. Dec 13 14:25:20.429978 systemd[1]: Started sshd@5-10.128.0.74:22-139.178.68.195:45584.service. Dec 13 14:25:20.719829 sshd[1550]: Accepted publickey for core from 139.178.68.195 port 45584 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:25:20.721649 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:20.728571 systemd[1]: Started session-6.scope. Dec 13 14:25:20.728904 systemd-logind[1316]: New session 6 of user core. Dec 13 14:25:20.897012 sudo[1555]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 14:25:20.897479 sudo[1555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:25:20.902055 sudo[1555]: pam_unix(sudo:session): session closed for user root Dec 13 14:25:20.914764 sudo[1554]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 14:25:20.915183 sudo[1554]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:25:20.928531 systemd[1]: Stopping audit-rules.service... Dec 13 14:25:20.929000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:25:20.936435 kernel: kauditd_printk_skb: 158 callbacks suppressed Dec 13 14:25:20.936514 kernel: audit: type=1305 audit(1734099920.929:140): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:25:20.970422 kernel: audit: type=1300 audit(1734099920.929:140): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd190f02a0 a2=420 a3=0 items=0 ppid=1 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:20.929000 audit[1558]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd190f02a0 a2=420 a3=0 items=0 ppid=1 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:20.951740 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 13 14:25:20.970915 auditctl[1558]: No rules Dec 13 14:25:20.952118 systemd[1]: Stopped audit-rules.service. Dec 13 14:25:20.955877 systemd[1]: Starting audit-rules.service... Dec 13 14:25:20.929000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:25:20.991084 kernel: audit: type=1327 audit(1734099920.929:140): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:25:21.013278 kernel: audit: type=1131 audit(1734099920.950:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:20.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.024848 augenrules[1576]: No rules Dec 13 14:25:21.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.027459 sudo[1554]: pam_unix(sudo:session): session closed for user root Dec 13 14:25:21.025834 systemd[1]: Finished audit-rules.service. Dec 13 14:25:21.026000 audit[1554]: USER_END pid=1554 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.073857 kernel: audit: type=1130 audit(1734099921.025:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.074007 kernel: audit: type=1106 audit(1734099921.026:143): pid=1554 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.074089 kernel: audit: type=1104 audit(1734099921.026:144): pid=1554 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.026000 audit[1554]: CRED_DISP pid=1554 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.077269 sshd[1550]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:21.082758 systemd-logind[1316]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:25:21.085270 systemd[1]: sshd@5-10.128.0.74:22-139.178.68.195:45584.service: Deactivated successfully. Dec 13 14:25:21.086520 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:25:21.088571 systemd-logind[1316]: Removed session 6. 
Dec 13 14:25:21.100392 kernel: audit: type=1106 audit(1734099921.073:145): pid=1550 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:21.073000 audit[1550]: USER_END pid=1550 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:21.073000 audit[1550]: CRED_DISP pid=1550 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:21.134015 systemd[1]: Started sshd@6-10.128.0.74:22-139.178.68.195:45600.service. Dec 13 14:25:21.153414 kernel: audit: type=1104 audit(1734099921.073:146): pid=1550 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:21.155814 kernel: audit: type=1131 audit(1734099921.084:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.74:22-139.178.68.195:45584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.74:22-139.178.68.195:45584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.74:22-139.178.68.195:45600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.431000 audit[1583]: USER_ACCT pid=1583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:21.431927 sshd[1583]: Accepted publickey for core from 139.178.68.195 port 45600 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:25:21.432000 audit[1583]: CRED_ACQ pid=1583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:21.432000 audit[1583]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe48eee970 a2=3 a3=0 items=0 ppid=1 pid=1583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:21.432000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:25:21.433816 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:21.440573 systemd[1]: Started session-7.scope. 
Dec 13 14:25:21.441083 systemd-logind[1316]: New session 7 of user core. Dec 13 14:25:21.450000 audit[1583]: USER_START pid=1583 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:21.453000 audit[1586]: CRED_ACQ pid=1586 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:21.609000 audit[1587]: USER_ACCT pid=1587 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.609778 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:25:21.609000 audit[1587]: CRED_REFR pid=1587 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.610208 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:25:21.612000 audit[1587]: USER_START pid=1587 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:21.642953 systemd[1]: Starting docker.service... Dec 13 14:25:21.692109 env[1597]: time="2024-12-13T14:25:21.691973018Z" level=info msg="Starting up" Dec 13 14:25:21.694467 env[1597]: time="2024-12-13T14:25:21.694422480Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:25:21.694467 env[1597]: time="2024-12-13T14:25:21.694458407Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:25:21.694656 env[1597]: time="2024-12-13T14:25:21.694486396Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:25:21.694656 env[1597]: time="2024-12-13T14:25:21.694501855Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:25:21.696894 env[1597]: time="2024-12-13T14:25:21.696861349Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:25:21.697183 env[1597]: time="2024-12-13T14:25:21.697158884Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:25:21.697300 env[1597]: time="2024-12-13T14:25:21.697281161Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:25:21.697450 env[1597]: time="2024-12-13T14:25:21.697412182Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:25:21.709544 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2942493361-merged.mount: Deactivated successfully. 
Dec 13 14:25:22.203788 env[1597]: time="2024-12-13T14:25:22.203724476Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:25:22.203788 env[1597]: time="2024-12-13T14:25:22.203759452Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:25:22.204137 env[1597]: time="2024-12-13T14:25:22.204105597Z" level=info msg="Loading containers: start." Dec 13 14:25:22.293000 audit[1627]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.293000 audit[1627]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc67c5fbc0 a2=0 a3=7ffc67c5fbac items=0 ppid=1597 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.293000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 14:25:22.296000 audit[1629]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.296000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffa8770c30 a2=0 a3=7fffa8770c1c items=0 ppid=1597 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.296000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 14:25:22.300000 audit[1631]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.300000 audit[1631]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff32f43a70 a2=0 a3=7fff32f43a5c items=0 ppid=1597 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.300000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:25:22.303000 audit[1633]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.303000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc9b8eb2d0 a2=0 a3=7ffc9b8eb2bc items=0 ppid=1597 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.303000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:25:22.308000 audit[1635]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.308000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffec9f16480 a2=0 a3=7ffec9f1646c items=0 ppid=1597 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.308000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 14:25:22.331000 audit[1640]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.331000 audit[1640]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcacf5bf50 a2=0 a3=7ffcacf5bf3c items=0 ppid=1597 pid=1640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.331000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 14:25:22.343000 audit[1642]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.343000 audit[1642]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5fd33260 a2=0 a3=7ffc5fd3324c items=0 ppid=1597 pid=1642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.343000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 14:25:22.347000 audit[1644]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.347000 audit[1644]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc2c4b8b00 a2=0 a3=7ffc2c4b8aec items=0 ppid=1597 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.347000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 14:25:22.350000 audit[1646]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.350000 audit[1646]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc77af3770 a2=0 a3=7ffc77af375c items=0 ppid=1597 pid=1646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.350000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:25:22.365000 audit[1650]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.365000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc29e7f4c0 a2=0 a3=7ffc29e7f4ac items=0 ppid=1597 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.365000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:25:22.370000 audit[1651]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.370000 audit[1651]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd0f41f780 a2=0 a3=7ffd0f41f76c items=0 ppid=1597 pid=1651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.370000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:25:22.389384 kernel: Initializing XFRM netlink socket Dec 13 14:25:22.435689 env[1597]: time="2024-12-13T14:25:22.435625026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:25:22.468000 audit[1659]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.468000 audit[1659]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffcd46cf750 a2=0 a3=7ffcd46cf73c items=0 ppid=1597 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.468000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 14:25:22.483000 audit[1662]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.483000 audit[1662]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd5633bb80 a2=0 a3=7ffd5633bb6c items=0 ppid=1597 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.483000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 14:25:22.487000 audit[1666]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.487000 audit[1666]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd413939e0 a2=0 a3=7ffd413939cc items=0 ppid=1597 pid=1666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.487000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 14:25:22.491000 audit[1668]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.491000 audit[1668]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc1c6ba290 a2=0 a3=7ffc1c6ba27c items=0 ppid=1597 pid=1668 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.491000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 14:25:22.493000 audit[1670]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.493000 audit[1670]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe683d2020 a2=0 a3=7ffe683d200c items=0 ppid=1597 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.493000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 14:25:22.496000 audit[1672]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.496000 audit[1672]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe1941acf0 a2=0 a3=7ffe1941acdc items=0 ppid=1597 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.496000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 14:25:22.499000 audit[1674]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1674 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.499000 audit[1674]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe0885b130 a2=0 a3=7ffe0885b11c items=0 ppid=1597 pid=1674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.499000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 14:25:22.512000 audit[1677]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.512000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe78a7dab0 a2=0 a3=7ffe78a7da9c items=0 ppid=1597 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.512000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 14:25:22.517000 audit[1679]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.517000 audit[1679]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffeaa561ca0 a2=0 a3=7ffeaa561c8c items=0 ppid=1597 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.517000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:25:22.520000 audit[1681]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.520000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc82015fb0 a2=0 a3=7ffc82015f9c items=0 ppid=1597 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.520000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:25:22.524000 audit[1683]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.524000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdeac2dba0 a2=0 a3=7ffdeac2db8c items=0 ppid=1597 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.524000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 14:25:22.526403 systemd-networkd[1078]: docker0: Link UP Dec 13 14:25:22.538000 audit[1687]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.538000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd2562dbe0 a2=0 a3=7ffd2562dbcc items=0 ppid=1597 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.538000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:25:22.544000 audit[1688]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:22.544000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdc1147820 a2=0 a3=7ffdc114780c items=0 ppid=1597 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:22.544000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:25:22.546478 env[1597]: time="2024-12-13T14:25:22.546426340Z" level=info msg="Loading containers: done." 
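[Editor's note] The netfilter audit records above carry each iptables invocation hex-encoded in the PROCTITLE field, with NUL bytes separating argv entries. A small decoder, offered as a reading aid for this log rather than part of the boot flow, recovers the commands docker issued while creating its chains:

```python
def decode_proctitle(hex_title: str) -> str:
    """Turn an audit PROCTITLE hex string into a readable command line."""
    return bytes.fromhex(hex_title).replace(b"\x00", b" ").decode()

# First docker rule in the records above:
#   decode_proctitle("2F7573722F7362696E2F69707461626C6573002D2D77616974"
#                    "002D74006E6174002D4E00444F434B4552")
#   -> "/usr/sbin/iptables --wait -t nat -N DOCKER"
```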
Dec 13 14:25:22.571089 env[1597]: time="2024-12-13T14:25:22.571012958Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:25:22.571391 env[1597]: time="2024-12-13T14:25:22.571314548Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:25:22.571562 env[1597]: time="2024-12-13T14:25:22.571507015Z" level=info msg="Daemon has completed initialization" Dec 13 14:25:22.595587 systemd[1]: Started docker.service. Dec 13 14:25:22.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:22.607032 env[1597]: time="2024-12-13T14:25:22.606945060Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:25:22.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:22.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:22.840132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:25:22.840475 systemd[1]: Stopped kubelet.service. Dec 13 14:25:22.843005 systemd[1]: Starting kubelet.service... Dec 13 14:25:23.069041 systemd[1]: Started kubelet.service. Dec 13 14:25:23.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:23.225264 kubelet[1727]: E1213 14:25:23.225092 1727 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:25:23.231593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:25:23.231872 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:25:23.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:25:23.911001 env[1331]: time="2024-12-13T14:25:23.910916409Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:25:24.456701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1855734690.mount: Deactivated successfully. 
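The kubelet failure above, and the identical one at 14:25:33, is the expected crash loop on a node that has not finished bootstrapping: systemd keeps restarting kubelet.service, and each attempt exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; kubeadm normally writes that file during init/join. A minimal sketch of the failing precondition, assuming the node's standard paths:

from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
if not cfg.is_file():
    # Mirrors the error kubelet keeps logging until the config file is written
    raise SystemExit(f"failed to load Kubelet config file {cfg}: open {cfg}: no such file or directory")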
Dec 13 14:25:26.496202 env[1331]: time="2024-12-13T14:25:26.496122028Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:26.499312 env[1331]: time="2024-12-13T14:25:26.499261631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:26.501836 env[1331]: time="2024-12-13T14:25:26.501787630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:26.504465 env[1331]: time="2024-12-13T14:25:26.504423839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:26.505463 env[1331]: time="2024-12-13T14:25:26.505410737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:25:26.520704 env[1331]: time="2024-12-13T14:25:26.520653530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:25:28.594237 env[1331]: time="2024-12-13T14:25:28.594162169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:28.597675 env[1331]: time="2024-12-13T14:25:28.597624751Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:28.600670 env[1331]: time="2024-12-13T14:25:28.600600935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:28.604915 env[1331]: time="2024-12-13T14:25:28.604848992Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:28.605439 env[1331]: time="2024-12-13T14:25:28.605390944Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:25:28.620582 env[1331]: time="2024-12-13T14:25:28.620535976Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:25:29.957013 env[1331]: time="2024-12-13T14:25:29.956940512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:29.959935 env[1331]: time="2024-12-13T14:25:29.959880648Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:29.962479 env[1331]: 
time="2024-12-13T14:25:29.962436832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:29.964819 env[1331]: time="2024-12-13T14:25:29.964781255Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:29.965791 env[1331]: time="2024-12-13T14:25:29.965739082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:25:29.979966 env[1331]: time="2024-12-13T14:25:29.979916625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:25:31.031165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435318702.mount: Deactivated successfully. Dec 13 14:25:31.719930 env[1331]: time="2024-12-13T14:25:31.719853987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:31.723248 env[1331]: time="2024-12-13T14:25:31.723197010Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:31.725472 env[1331]: time="2024-12-13T14:25:31.725428725Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:31.727838 env[1331]: time="2024-12-13T14:25:31.727789517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:31.728447 env[1331]: time="2024-12-13T14:25:31.728372193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:25:31.743673 env[1331]: time="2024-12-13T14:25:31.743620929Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:25:32.156607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount713228947.mount: Deactivated successfully. Dec 13 14:25:33.337116 env[1331]: time="2024-12-13T14:25:33.337040506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:33.340069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:25:33.357139 kernel: kauditd_printk_skb: 88 callbacks suppressed Dec 13 14:25:33.357209 kernel: audit: type=1130 audit(1734099933.339:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:33.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:33.357339 env[1331]: time="2024-12-13T14:25:33.341404098Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:33.357339 env[1331]: time="2024-12-13T14:25:33.344712522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:33.357339 env[1331]: time="2024-12-13T14:25:33.349422582Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:33.357339 env[1331]: time="2024-12-13T14:25:33.350895378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:25:33.340409 systemd[1]: Stopped kubelet.service. Dec 13 14:25:33.343206 systemd[1]: Starting kubelet.service... Dec 13 14:25:33.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:33.401419 env[1331]: time="2024-12-13T14:25:33.401061799Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:25:33.406383 kernel: audit: type=1131 audit(1734099933.339:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:33.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:33.569101 systemd[1]: Started kubelet.service. Dec 13 14:25:33.591385 kernel: audit: type=1130 audit(1734099933.568:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:33.645054 kubelet[1779]: E1213 14:25:33.644978 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:25:33.647643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:25:33.647936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:25:33.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:25:33.670411 kernel: audit: type=1131 audit(1734099933.647:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 13 14:25:33.986876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112646.mount: Deactivated successfully. Dec 13 14:25:33.993669 env[1331]: time="2024-12-13T14:25:33.993604895Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:33.996464 env[1331]: time="2024-12-13T14:25:33.996415292Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:33.999020 env[1331]: time="2024-12-13T14:25:33.998948041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:34.001382 env[1331]: time="2024-12-13T14:25:34.001317878Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:34.002258 env[1331]: time="2024-12-13T14:25:34.002204798Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:25:34.016769 env[1331]: time="2024-12-13T14:25:34.016719224Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:25:34.493087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613398470.mount: Deactivated successfully. Dec 13 14:25:37.154105 env[1331]: time="2024-12-13T14:25:37.154021504Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:37.157490 env[1331]: time="2024-12-13T14:25:37.157439597Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:37.160326 env[1331]: time="2024-12-13T14:25:37.160279218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:37.168125 env[1331]: time="2024-12-13T14:25:37.168072199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:37.169130 env[1331]: time="2024-12-13T14:25:37.169069150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:25:40.027987 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:25:40.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:40.051419 kernel: audit: type=1131 audit(1734099940.028:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:40.441935 systemd[1]: Stopped kubelet.service. Dec 13 14:25:40.465306 kernel: audit: type=1130 audit(1734099940.440:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:40.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:40.445972 systemd[1]: Starting kubelet.service... Dec 13 14:25:40.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:40.486474 systemd[1]: Reloading. Dec 13 14:25:40.498373 kernel: audit: type=1131 audit(1734099940.440:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:40.608556 /usr/lib/systemd/system-generators/torcx-generator[1884]: time="2024-12-13T14:25:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:25:40.613091 /usr/lib/systemd/system-generators/torcx-generator[1884]: time="2024-12-13T14:25:40Z" level=info msg="torcx already run" Dec 13 14:25:40.764725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:25:40.764752 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:25:40.791363 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:25:40.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:40.905301 systemd[1]: Started kubelet.service. Dec 13 14:25:40.929630 kernel: audit: type=1130 audit(1734099940.904:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:40.931300 systemd[1]: Stopping kubelet.service... Dec 13 14:25:40.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:40.933413 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:25:40.933797 systemd[1]: Stopped kubelet.service. Dec 13 14:25:40.937190 systemd[1]: Starting kubelet.service... Dec 13 14:25:40.957379 kernel: audit: type=1131 audit(1734099940.932:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:25:41.157102 systemd[1]: Started kubelet.service. Dec 13 14:25:41.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:41.183412 kernel: audit: type=1130 audit(1734099941.159:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:41.265504 kubelet[1946]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:25:41.265962 kubelet[1946]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:25:41.266040 kubelet[1946]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:25:41.266217 kubelet[1946]: I1213 14:25:41.266177 1946 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:25:42.469403 kubelet[1946]: I1213 14:25:42.469347 1946 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:25:42.469921 kubelet[1946]: I1213 14:25:42.469872 1946 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:25:42.470255 kubelet[1946]: I1213 14:25:42.470216 1946 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:25:42.509820 kubelet[1946]: E1213 14:25:42.509751 1946 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:42.512628 kubelet[1946]: I1213 14:25:42.512586 1946 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:25:42.527925 kubelet[1946]: I1213 14:25:42.527889 1946 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:25:42.528738 kubelet[1946]: I1213 14:25:42.528700 1946 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:25:42.529000 kubelet[1946]: I1213 14:25:42.528965 1946 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:25:42.529199 kubelet[1946]: I1213 14:25:42.529005 1946 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:25:42.529199 kubelet[1946]: I1213 14:25:42.529029 1946 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:25:42.530870 kubelet[1946]: I1213 14:25:42.530830 1946 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:25:42.531051 kubelet[1946]: I1213 14:25:42.531020 1946 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:25:42.531051 kubelet[1946]: I1213 14:25:42.531045 1946 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:25:42.531176 kubelet[1946]: I1213 14:25:42.531089 1946 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:25:42.531176 kubelet[1946]: I1213 14:25:42.531111 1946 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:25:42.542549 kubelet[1946]: I1213 14:25:42.542519 1946 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:25:42.543396 kubelet[1946]: W1213 14:25:42.543289 1946 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:42.543396 kubelet[1946]: E1213 14:25:42.543393 1946 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused 
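Every reflector and certificate-manager error in this stretch has the same root cause: kubelet is trying to reach the API server at https://10.128.0.74:6443 while the kube-apiserver static pod is not yet running, so each request fails with connection refused. A quick probe showing the same failure mode (an illustrative snippet, assumed to run on the node itself):

import socket

# Probe the API server endpoint reported in the log. While kube-apiserver
# is not yet serving, this fails with a connection-refused error.
try:
    with socket.create_connection(("10.128.0.74", 6443), timeout=2):
        print("apiserver is accepting connections")
except OSError as exc:
    print(f"dial tcp 10.128.0.74:6443: {exc}")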
Dec 13 14:25:42.543605 kubelet[1946]: W1213 14:25:42.543496 1946 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:42.543605 kubelet[1946]: E1213 14:25:42.543557 1946 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:42.547581 kubelet[1946]: I1213 14:25:42.547524 1946 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:25:42.547747 kubelet[1946]: W1213 14:25:42.547638 1946 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:25:42.548572 kubelet[1946]: I1213 14:25:42.548543 1946 server.go:1256] "Started kubelet" Dec 13 14:25:42.558000 audit[1946]: AVC avc: denied { mac_admin } for pid=1946 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:25:42.571840 kubelet[1946]: I1213 14:25:42.570988 1946 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:25:42.558000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:25:42.590606 kubelet[1946]: I1213 14:25:42.589323 1946 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:25:42.590606 kubelet[1946]: I1213 14:25:42.589784 1946 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:25:42.593026 kernel: audit: type=1400 audit(1734099942.558:196): avc: denied { mac_admin } for pid=1946 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:25:42.593139 kernel: audit: type=1401 audit(1734099942.558:196): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:25:42.593182 kernel: audit: type=1300 audit(1734099942.558:196): arch=c000003e syscall=188 success=no exit=-22 a0=c000bd1200 a1=c000bd4a80 a2=c000bd11d0 a3=25 items=0 ppid=1 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.558000 audit[1946]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bd1200 a1=c000bd4a80 a2=c000bd11d0 a3=25 items=0 ppid=1 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.596133 kubelet[1946]: I1213 14:25:42.596099 1946 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:25:42.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:25:42.653317 kernel: audit: type=1327 audit(1734099942.558:196): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:25:42.656113 kubelet[1946]: I1213 14:25:42.656070 1946 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:25:42.654000 audit[1946]: AVC avc: denied { mac_admin } for pid=1946 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:25:42.654000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:25:42.654000 audit[1946]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000cfe380 a1=c000c6ad80 a2=c000c62e40 a3=25 items=0 ppid=1 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.654000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:25:42.656832 kubelet[1946]: I1213 14:25:42.656790 1946 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:25:42.656988 kubelet[1946]: I1213 14:25:42.656957 1946 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:25:42.658537 kubelet[1946]: E1213 14:25:42.658497 1946 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal.1810c2b5c8a106f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,UID:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:25:42.548506354 +0000 UTC m=+1.377163106,LastTimestamp:2024-12-13 14:25:42.548506354 +0000 UTC m=+1.377163106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,}" Dec 13 14:25:42.661765 kubelet[1946]: I1213 14:25:42.661722 1946 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:25:42.663900 kubelet[1946]: I1213 14:25:42.663872 1946 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:25:42.664172 kubelet[1946]: I1213 14:25:42.664141 1946 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:25:42.665473 kubelet[1946]: W1213 14:25:42.665413 1946 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:42.665639 kubelet[1946]: E1213 14:25:42.665619 1946 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:42.665900 kubelet[1946]: E1213 14:25:42.665881 1946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.74:6443: connect: connection refused" interval="200ms" Dec 13 14:25:42.665000 audit[1957]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:42.665000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff8222ed30 a2=0 a3=7fff8222ed1c items=0 ppid=1946 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:25:42.667716 kubelet[1946]: I1213 14:25:42.667694 1946 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:25:42.667933 kubelet[1946]: I1213 14:25:42.667907 1946 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:25:42.670206 kubelet[1946]: I1213 14:25:42.670183 1946 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:25:42.669000 audit[1958]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:42.669000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe56c3d9d0 a2=0 a3=7ffe56c3d9bc items=0 ppid=1946 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:25:42.672998 kubelet[1946]: E1213 14:25:42.672967 1946 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:25:42.673000 audit[1960]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:42.673000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff08415200 a2=0 a3=7fff084151ec items=0 ppid=1946 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:25:42.677000 audit[1962]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:42.677000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe4ecd4b20 a2=0 a3=7ffe4ecd4b0c items=0 ppid=1946 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:25:42.699000 audit[1966]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:42.699000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff566f96a0 a2=0 a3=7fff566f968c items=0 ppid=1946 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 14:25:42.701557 kubelet[1946]: I1213 14:25:42.701526 1946 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:25:42.702000 audit[1967]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:25:42.702000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdd80dc280 a2=0 a3=7ffdd80dc26c items=0 ppid=1946 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:25:42.704403 kubelet[1946]: I1213 14:25:42.704335 1946 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:25:42.704533 kubelet[1946]: I1213 14:25:42.704409 1946 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:25:42.704533 kubelet[1946]: I1213 14:25:42.704441 1946 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:25:42.704533 kubelet[1946]: E1213 14:25:42.704519 1946 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:25:42.704000 audit[1968]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:42.704000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6fe01a30 a2=0 a3=7fff6fe01a1c items=0 ppid=1946 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:25:42.706000 audit[1969]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:42.706000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb14ab800 a2=0 a3=7ffcb14ab7ec items=0 ppid=1946 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:25:42.708000 audit[1970]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:25:42.708000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc477c99b0 a2=0 a3=7ffc477c999c items=0 ppid=1946 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:25:42.709000 audit[1971]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:25:42.709000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd892ef580 a2=0 a3=7ffd892ef56c items=0 ppid=1946 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:25:42.712000 audit[1972]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:25:42.712000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd945e6460 a2=0 a3=7ffd945e644c items=0 ppid=1946 pid=1972 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.712000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:25:42.714755 kubelet[1946]: W1213 14:25:42.714717 1946 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:42.714901 kubelet[1946]: E1213 14:25:42.714773 1946 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:42.716034 kubelet[1946]: I1213 14:25:42.716004 1946 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:25:42.716034 kubelet[1946]: I1213 14:25:42.716036 1946 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:25:42.716187 kubelet[1946]: I1213 14:25:42.716072 1946 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:25:42.715000 audit[1973]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:25:42.715000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe0fdd1490 a2=0 a3=7ffe0fdd147c items=0 ppid=1946 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:25:42.718484 kubelet[1946]: I1213 14:25:42.718446 1946 policy_none.go:49] "None policy: Start" Dec 13 14:25:42.719326 kubelet[1946]: I1213 14:25:42.719305 1946 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:25:42.721416 kubelet[1946]: I1213 14:25:42.719476 1946 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:25:42.726213 kubelet[1946]: I1213 14:25:42.726164 1946 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:25:42.724000 audit[1946]: AVC avc: denied { mac_admin } for pid=1946 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:25:42.724000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:25:42.724000 audit[1946]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f81110 a1=c000f5d650 a2=c000f810e0 a3=25 items=0 ppid=1 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:42.724000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:25:42.726702 kubelet[1946]: I1213 14:25:42.726277 1946 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:25:42.726702 kubelet[1946]: I1213 14:25:42.726566 1946 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:25:42.733195 kubelet[1946]: E1213 14:25:42.733150 1946 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" not found" Dec 13 14:25:42.770972 kubelet[1946]: I1213 14:25:42.770934 1946 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.771421 kubelet[1946]: E1213 14:25:42.771395 1946 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.74:6443/api/v1/nodes\": dial tcp 10.128.0.74:6443: connect: connection refused" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.804743 kubelet[1946]: I1213 14:25:42.804674 1946 topology_manager.go:215] "Topology Admit Handler" podUID="78de7dfda62b37103e2e7229f037b45e" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.811678 kubelet[1946]: I1213 14:25:42.811642 1946 topology_manager.go:215] "Topology Admit Handler" podUID="e6a85123de15b792572c42bd569b334a" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.817451 kubelet[1946]: I1213 14:25:42.817414 1946 topology_manager.go:215] "Topology Admit Handler" podUID="4336f584ee556b3287a91c2b2a134107" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.865709 kubelet[1946]: I1213 14:25:42.865661 1946 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78de7dfda62b37103e2e7229f037b45e-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"78de7dfda62b37103e2e7229f037b45e\") " pod="kube-system/kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.866031 kubelet[1946]: I1213 14:25:42.866008 1946 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"4336f584ee556b3287a91c2b2a134107\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.866204 kubelet[1946]: I1213 14:25:42.866185 1946 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"4336f584ee556b3287a91c2b2a134107\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.866394 kubelet[1946]: I1213 14:25:42.866375 1946 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6a85123de15b792572c42bd569b334a-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"e6a85123de15b792572c42bd569b334a\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.866940 kubelet[1946]: I1213 14:25:42.866556 1946 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6a85123de15b792572c42bd569b334a-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"e6a85123de15b792572c42bd569b334a\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.867075 kubelet[1946]: I1213 14:25:42.867062 1946 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6a85123de15b792572c42bd569b334a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"e6a85123de15b792572c42bd569b334a\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.867206 kubelet[1946]: I1213 14:25:42.867192 1946 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"4336f584ee556b3287a91c2b2a134107\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.867409 kubelet[1946]: I1213 14:25:42.867341 1946 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"4336f584ee556b3287a91c2b2a134107\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.867586 kubelet[1946]: E1213 14:25:42.867569 1946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.74:6443: connect: connection refused" interval="400ms" Dec 13 14:25:42.867698 kubelet[1946]: I1213 14:25:42.867628 1946 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"4336f584ee556b3287a91c2b2a134107\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.978090 kubelet[1946]: I1213 14:25:42.977971 1946 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:42.978701 kubelet[1946]: E1213 14:25:42.978676 1946 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.74:6443/api/v1/nodes\": dial tcp 10.128.0.74:6443: connect: connection refused" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:43.122013 env[1331]: time="2024-12-13T14:25:43.121931487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,Uid:78de7dfda62b37103e2e7229f037b45e,Namespace:kube-system,Attempt:0,}" Dec 13 14:25:43.129760 env[1331]: time="2024-12-13T14:25:43.129701861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,Uid:e6a85123de15b792572c42bd569b334a,Namespace:kube-system,Attempt:0,}" Dec 13 14:25:43.135062 env[1331]: time="2024-12-13T14:25:43.135010671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,Uid:4336f584ee556b3287a91c2b2a134107,Namespace:kube-system,Attempt:0,}" Dec 13 14:25:43.268967 kubelet[1946]: E1213 14:25:43.268835 1946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.74:6443: connect: connection refused" interval="800ms" Dec 13 14:25:43.384788 kubelet[1946]: I1213 14:25:43.384739 1946 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:43.385261 kubelet[1946]: E1213 14:25:43.385229 1946 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.74:6443/api/v1/nodes\": dial tcp 10.128.0.74:6443: connect: connection refused" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:43.499618 kubelet[1946]: W1213 14:25:43.499561 1946 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:43.499618 kubelet[1946]: E1213 14:25:43.499619 1946 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:43.539982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116542437.mount: Deactivated successfully. 
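At this point kubelet has admitted the three control-plane static pods and containerd starts building their sandboxes, which is why the pause:3.6 image events and runc shims appear next. The pod definitions come from the static pod path reported earlier ("Adding static pod path" path="/etc/kubernetes/manifests"); a small sketch for listing them on the node, with the usual kubeadm file names assumed rather than taken from this log:

from pathlib import Path

# Static-pod manifests that produce the RunPodSandbox calls above.
# The directory comes from the "Adding static pod path" log line; the
# exact file names are the conventional kubeadm ones and may differ.
for manifest in sorted(Path("/etc/kubernetes/manifests").glob("*.yaml")):
    print(manifest.name)
# Typically: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml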
Dec 13 14:25:43.554072 env[1331]: time="2024-12-13T14:25:43.554014766Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.555503 env[1331]: time="2024-12-13T14:25:43.555454761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.559752 env[1331]: time="2024-12-13T14:25:43.559705577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.561558 env[1331]: time="2024-12-13T14:25:43.561486465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.562717 env[1331]: time="2024-12-13T14:25:43.562663633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.571856 env[1331]: time="2024-12-13T14:25:43.571808393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.576015 env[1331]: time="2024-12-13T14:25:43.575947240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.577297 env[1331]: time="2024-12-13T14:25:43.577214816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.578733 env[1331]: time="2024-12-13T14:25:43.578682401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.581222 env[1331]: time="2024-12-13T14:25:43.581171467Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.583493 env[1331]: time="2024-12-13T14:25:43.583448517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.585019 env[1331]: time="2024-12-13T14:25:43.584974090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:43.627682 env[1331]: time="2024-12-13T14:25:43.627535852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:43.627682 env[1331]: time="2024-12-13T14:25:43.627623244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:43.628160 env[1331]: time="2024-12-13T14:25:43.627642897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:43.628529 env[1331]: time="2024-12-13T14:25:43.628423192Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41be0f80c8dc44107c75eb47a2db86e13e4baf491c7d66a33fa696471fdd4b54 pid=1984 runtime=io.containerd.runc.v2 Dec 13 14:25:43.642344 env[1331]: time="2024-12-13T14:25:43.642197158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:43.642344 env[1331]: time="2024-12-13T14:25:43.642255254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:43.642344 env[1331]: time="2024-12-13T14:25:43.642274287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:43.642718 env[1331]: time="2024-12-13T14:25:43.642505660Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7aa4c6effd0e76b69e15bcf4a9225fc9bddc84501289dbd4b3d475dc24fbb645 pid=2003 runtime=io.containerd.runc.v2 Dec 13 14:25:43.652864 env[1331]: time="2024-12-13T14:25:43.652642780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:43.653028 env[1331]: time="2024-12-13T14:25:43.652891130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:43.653028 env[1331]: time="2024-12-13T14:25:43.652996370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:43.653461 env[1331]: time="2024-12-13T14:25:43.653399202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0023d50bc3b5700dfef850df48eb3034adae374729bb352f473d3eb6e1e0c279 pid=2012 runtime=io.containerd.runc.v2 Dec 13 14:25:43.803727 env[1331]: time="2024-12-13T14:25:43.803568268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,Uid:4336f584ee556b3287a91c2b2a134107,Namespace:kube-system,Attempt:0,} returns sandbox id \"41be0f80c8dc44107c75eb47a2db86e13e4baf491c7d66a33fa696471fdd4b54\"" Dec 13 14:25:43.808136 kubelet[1946]: E1213 14:25:43.808089 1946 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flat" Dec 13 14:25:43.811643 env[1331]: time="2024-12-13T14:25:43.811576282Z" level=info msg="CreateContainer within sandbox \"41be0f80c8dc44107c75eb47a2db86e13e4baf491c7d66a33fa696471fdd4b54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:25:43.817531 env[1331]: time="2024-12-13T14:25:43.817473829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,Uid:e6a85123de15b792572c42bd569b334a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7aa4c6effd0e76b69e15bcf4a9225fc9bddc84501289dbd4b3d475dc24fbb645\"" Dec 13 14:25:43.819629 kubelet[1946]: E1213 14:25:43.819595 1946 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-21291" Dec 13 14:25:43.832671 env[1331]: time="2024-12-13T14:25:43.830834832Z" level=info msg="CreateContainer within sandbox \"7aa4c6effd0e76b69e15bcf4a9225fc9bddc84501289dbd4b3d475dc24fbb645\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:25:43.837695 env[1331]: time="2024-12-13T14:25:43.837633076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,Uid:78de7dfda62b37103e2e7229f037b45e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0023d50bc3b5700dfef850df48eb3034adae374729bb352f473d3eb6e1e0c279\"" Dec 13 14:25:43.839390 kubelet[1946]: E1213 14:25:43.839332 1946 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-21291" Dec 13 14:25:43.841591 env[1331]: time="2024-12-13T14:25:43.841541103Z" level=info msg="CreateContainer within sandbox \"0023d50bc3b5700dfef850df48eb3034adae374729bb352f473d3eb6e1e0c279\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:25:43.849751 env[1331]: time="2024-12-13T14:25:43.849672482Z" level=info msg="CreateContainer within sandbox \"41be0f80c8dc44107c75eb47a2db86e13e4baf491c7d66a33fa696471fdd4b54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab51b66a8e5fbdf3ea437deaefadbf30a8e2345fc479d6bb689baa755e1ca4ce\"" Dec 13 14:25:43.850788 
env[1331]: time="2024-12-13T14:25:43.850754016Z" level=info msg="StartContainer for \"ab51b66a8e5fbdf3ea437deaefadbf30a8e2345fc479d6bb689baa755e1ca4ce\"" Dec 13 14:25:43.865393 env[1331]: time="2024-12-13T14:25:43.865317574Z" level=info msg="CreateContainer within sandbox \"7aa4c6effd0e76b69e15bcf4a9225fc9bddc84501289dbd4b3d475dc24fbb645\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bce2bd532ddb02384e5cc5f3da8e0de3846738348271b4576b66a1cd3bfd45c8\"" Dec 13 14:25:43.866172 env[1331]: time="2024-12-13T14:25:43.866116810Z" level=info msg="StartContainer for \"bce2bd532ddb02384e5cc5f3da8e0de3846738348271b4576b66a1cd3bfd45c8\"" Dec 13 14:25:43.869422 kubelet[1946]: W1213 14:25:43.869305 1946 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:43.869585 kubelet[1946]: E1213 14:25:43.869459 1946 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:43.872222 env[1331]: time="2024-12-13T14:25:43.872162513Z" level=info msg="CreateContainer within sandbox \"0023d50bc3b5700dfef850df48eb3034adae374729bb352f473d3eb6e1e0c279\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf5099e5354d6c8fc86768116b228a7a0f9bf0e40b74b11d5361502ed82aab55\"" Dec 13 14:25:43.872867 env[1331]: time="2024-12-13T14:25:43.872826934Z" level=info msg="StartContainer for \"bf5099e5354d6c8fc86768116b228a7a0f9bf0e40b74b11d5361502ed82aab55\"" Dec 13 14:25:43.934973 kubelet[1946]: W1213 14:25:43.934894 1946 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:43.934973 kubelet[1946]: E1213 14:25:43.934981 1946 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:44.041091 env[1331]: time="2024-12-13T14:25:44.041037886Z" level=info msg="StartContainer for \"ab51b66a8e5fbdf3ea437deaefadbf30a8e2345fc479d6bb689baa755e1ca4ce\" returns successfully" Dec 13 14:25:44.070897 kubelet[1946]: E1213 14:25:44.069893 1946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.74:6443: connect: connection refused" interval="1.6s" Dec 13 14:25:44.082643 env[1331]: time="2024-12-13T14:25:44.082579874Z" level=info msg="StartContainer for \"bf5099e5354d6c8fc86768116b228a7a0f9bf0e40b74b11d5361502ed82aab55\" returns successfully" Dec 13 14:25:44.112652 env[1331]: time="2024-12-13T14:25:44.112583597Z" level=info msg="StartContainer for 
\"bce2bd532ddb02384e5cc5f3da8e0de3846738348271b4576b66a1cd3bfd45c8\" returns successfully" Dec 13 14:25:44.128985 kubelet[1946]: W1213 14:25:44.128873 1946 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:44.128985 kubelet[1946]: E1213 14:25:44.129001 1946 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Dec 13 14:25:44.190932 kubelet[1946]: I1213 14:25:44.190885 1946 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:44.191326 kubelet[1946]: E1213 14:25:44.191297 1946 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.74:6443/api/v1/nodes\": dial tcp 10.128.0.74:6443: connect: connection refused" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:45.799282 kubelet[1946]: I1213 14:25:45.799225 1946 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:47.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.74:22-103.219.154.67:33658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:47.420294 systemd[1]: Started sshd@7-10.128.0.74:22-103.219.154.67:33658.service. Dec 13 14:25:47.426144 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 13 14:25:47.426271 kernel: audit: type=1130 audit(1734099947.419:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.74:22-103.219.154.67:33658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:47.518659 kubelet[1946]: E1213 14:25:47.518604 1946 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" not found" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:47.536933 kubelet[1946]: I1213 14:25:47.536879 1946 apiserver.go:52] "Watching apiserver" Dec 13 14:25:47.557929 kubelet[1946]: I1213 14:25:47.557887 1946 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:47.564296 kubelet[1946]: I1213 14:25:47.564217 1946 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:25:47.597443 kubelet[1946]: E1213 14:25:47.597393 1946 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal.1810c2b5c8a106f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,UID:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:25:42.548506354 +0000 UTC m=+1.377163106,LastTimestamp:2024-12-13 14:25:42.548506354 +0000 UTC m=+1.377163106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal,}" Dec 13 14:25:48.153411 sshd[2208]: Invalid user api_user from 103.219.154.67 port 33658 Dec 13 14:25:48.163000 audit[2208]: USER_AUTH pid=2208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="api_user" exe="/usr/sbin/sshd" hostname=103.219.154.67 addr=103.219.154.67 terminal=ssh res=failed' Dec 13 14:25:48.190266 sshd[2208]: Failed password for invalid user api_user from 103.219.154.67 port 33658 ssh2 Dec 13 14:25:48.190390 kernel: audit: type=1100 audit(1734099948.163:212): pid=2208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="api_user" exe="/usr/sbin/sshd" hostname=103.219.154.67 addr=103.219.154.67 terminal=ssh res=failed' Dec 13 14:25:48.299075 sshd[2208]: Received disconnect from 103.219.154.67 port 33658:11: Bye Bye [preauth] Dec 13 14:25:48.299292 sshd[2208]: Disconnected from invalid user api_user 103.219.154.67 port 33658 [preauth] Dec 13 14:25:48.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.74:22-103.219.154.67:33658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:48.301214 systemd[1]: sshd@7-10.128.0.74:22-103.219.154.67:33658.service: Deactivated successfully. Dec 13 14:25:48.326382 kernel: audit: type=1131 audit(1734099948.300:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.74:22-103.219.154.67:33658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:49.714932 kubelet[1946]: W1213 14:25:49.714089 1946 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 14:25:50.370055 systemd[1]: Reloading. Dec 13 14:25:50.492265 /usr/lib/systemd/system-generators/torcx-generator[2234]: time="2024-12-13T14:25:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:25:50.492311 /usr/lib/systemd/system-generators/torcx-generator[2234]: time="2024-12-13T14:25:50Z" level=info msg="torcx already run" Dec 13 14:25:50.612749 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:25:50.612784 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:25:50.640250 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:25:50.778325 kubelet[1946]: I1213 14:25:50.778265 1946 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:25:50.778832 systemd[1]: Stopping kubelet.service... Dec 13 14:25:50.798157 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:25:50.798669 systemd[1]: Stopped kubelet.service. Dec 13 14:25:50.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:50.802054 systemd[1]: Starting kubelet.service... Dec 13 14:25:50.820375 kernel: audit: type=1131 audit(1734099950.798:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:51.046081 systemd[1]: Started kubelet.service. Dec 13 14:25:51.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:51.069516 kernel: audit: type=1130 audit(1734099951.046:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:51.182336 kubelet[2291]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:25:51.182336 kubelet[2291]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 14:25:51.182336 kubelet[2291]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:25:51.183084 kubelet[2291]: I1213 14:25:51.182446 2291 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:25:51.191538 kubelet[2291]: I1213 14:25:51.191503 2291 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:25:51.191769 kubelet[2291]: I1213 14:25:51.191750 2291 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:25:51.192295 kubelet[2291]: I1213 14:25:51.192273 2291 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:25:51.195132 kubelet[2291]: I1213 14:25:51.195103 2291 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:25:51.199602 kubelet[2291]: I1213 14:25:51.199571 2291 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:25:51.220078 kubelet[2291]: I1213 14:25:51.220040 2291 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:25:51.221294 kubelet[2291]: I1213 14:25:51.221268 2291 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:25:51.221829 kubelet[2291]: I1213 14:25:51.221801 2291 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:25:51.222161 kubelet[2291]: I1213 14:25:51.222141 2291 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:25:51.222277 kubelet[2291]: I1213 14:25:51.222263 2291 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:25:51.222514 kubelet[2291]: I1213 14:25:51.222441 2291 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:25:51.222694 kubelet[2291]: I1213 14:25:51.222672 2291 kubelet.go:396] "Attempting to sync node 
with API server" Dec 13 14:25:51.222787 kubelet[2291]: I1213 14:25:51.222722 2291 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:25:51.222787 kubelet[2291]: I1213 14:25:51.222769 2291 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:25:51.222895 kubelet[2291]: I1213 14:25:51.222815 2291 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:25:51.226632 kubelet[2291]: I1213 14:25:51.226574 2291 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:25:51.226976 kubelet[2291]: I1213 14:25:51.226953 2291 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:25:51.228684 kubelet[2291]: I1213 14:25:51.228656 2291 server.go:1256] "Started kubelet" Dec 13 14:25:51.260396 kernel: audit: type=1400 audit(1734099951.233:216): avc: denied { mac_admin } for pid=2291 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:25:51.260569 kernel: audit: type=1401 audit(1734099951.233:216): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:25:51.233000 audit[2291]: AVC avc: denied { mac_admin } for pid=2291 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:25:51.233000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.233484 2291 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.233535 2291 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.233591 2291 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.242254 2291 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.243484 2291 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.245594 2291 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.245822 2291 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.248531 2291 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.251934 2291 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.252136 2291 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:25:51.260774 kubelet[2291]: I1213 14:25:51.256419 2291 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:25:51.269277 kubelet[2291]: I1213 14:25:51.269244 2291 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:25:51.269539 kubelet[2291]: I1213 14:25:51.269522 2291 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:25:51.269661 kubelet[2291]: I1213 14:25:51.269648 2291 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:25:51.269818 kubelet[2291]: E1213 14:25:51.269805 2291 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:25:51.270527 kubelet[2291]: I1213 14:25:51.270503 2291 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:25:51.270865 kubelet[2291]: I1213 14:25:51.270824 2291 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:25:51.233000 audit[2291]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b52630 a1=c000b78090 a2=c000b52600 a3=25 items=0 ppid=1 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:51.288881 kubelet[2291]: I1213 14:25:51.288850 2291 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:25:51.303445 kernel: audit: type=1300 audit(1734099951.233:216): arch=c000003e syscall=188 success=no exit=-22 a0=c000b52630 a1=c000b78090 a2=c000b52600 a3=25 items=0 ppid=1 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:51.233000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:25:51.359797 kernel: audit: type=1327 audit(1734099951.233:216): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:25:51.360014 kernel: audit: type=1400 audit(1734099951.233:217): avc: denied { mac_admin } for pid=2291 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:25:51.233000 audit[2291]: AVC avc: denied { mac_admin } for pid=2291 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:25:51.233000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:25:51.233000 audit[2291]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00083f5c0 a1=c000b780a8 a2=c000b526c0 a3=25 items=0 ppid=1 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:51.233000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:25:51.382854 kubelet[2291]: E1213 14:25:51.382830 2291 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:25:51.394029 kubelet[2291]: I1213 14:25:51.391639 2291 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.408674 kubelet[2291]: I1213 14:25:51.406333 2291 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.408674 kubelet[2291]: I1213 14:25:51.406452 2291 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.486338 kubelet[2291]: I1213 14:25:51.484881 2291 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:25:51.486338 kubelet[2291]: I1213 14:25:51.484910 2291 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:25:51.486338 kubelet[2291]: I1213 14:25:51.484934 2291 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:25:51.486338 kubelet[2291]: I1213 14:25:51.485140 2291 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:25:51.486338 kubelet[2291]: I1213 14:25:51.485170 2291 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:25:51.486338 kubelet[2291]: I1213 14:25:51.485180 2291 policy_none.go:49] "None policy: Start" Dec 13 14:25:51.486338 kubelet[2291]: I1213 14:25:51.486001 2291 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:25:51.486338 kubelet[2291]: I1213 14:25:51.486032 2291 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:25:51.486338 kubelet[2291]: I1213 14:25:51.486260 2291 state_mem.go:75] "Updated machine memory state" Dec 13 14:25:51.488516 kubelet[2291]: I1213 14:25:51.488469 2291 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:25:51.488000 audit[2291]: AVC avc: denied { mac_admin } for pid=2291 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:25:51.488000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:25:51.488000 audit[2291]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011cef60 a1=c0011d2480 a2=c0011cef30 a3=25 items=0 ppid=1 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:51.488000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:25:51.490289 kubelet[2291]: I1213 14:25:51.488561 2291 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:25:51.490849 kubelet[2291]: I1213 14:25:51.490828 2291 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:25:51.585316 kubelet[2291]: I1213 14:25:51.585163 2291 topology_manager.go:215] "Topology Admit Handler" podUID="e6a85123de15b792572c42bd569b334a" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.585316 kubelet[2291]: I1213 14:25:51.585312 2291 topology_manager.go:215] "Topology Admit Handler" podUID="4336f584ee556b3287a91c2b2a134107" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.585636 kubelet[2291]: I1213 14:25:51.585395 2291 topology_manager.go:215] "Topology Admit Handler" podUID="78de7dfda62b37103e2e7229f037b45e" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.593109 kubelet[2291]: W1213 14:25:51.592716 2291 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 14:25:51.597281 kubelet[2291]: W1213 14:25:51.597246 2291 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 14:25:51.597602 kubelet[2291]: W1213 14:25:51.597264 2291 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 14:25:51.597771 kubelet[2291]: E1213 14:25:51.597578 2291 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.684790 kubelet[2291]: I1213 14:25:51.684740 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"4336f584ee556b3287a91c2b2a134107\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.685013 kubelet[2291]: I1213 14:25:51.684817 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"4336f584ee556b3287a91c2b2a134107\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.685013 kubelet[2291]: I1213 14:25:51.684857 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: 
\"4336f584ee556b3287a91c2b2a134107\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.685013 kubelet[2291]: I1213 14:25:51.684898 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"4336f584ee556b3287a91c2b2a134107\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.685013 kubelet[2291]: I1213 14:25:51.684934 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6a85123de15b792572c42bd569b334a-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"e6a85123de15b792572c42bd569b334a\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.685234 kubelet[2291]: I1213 14:25:51.684966 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6a85123de15b792572c42bd569b334a-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"e6a85123de15b792572c42bd569b334a\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.685234 kubelet[2291]: I1213 14:25:51.684998 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6a85123de15b792572c42bd569b334a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"e6a85123de15b792572c42bd569b334a\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.685234 kubelet[2291]: I1213 14:25:51.685035 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4336f584ee556b3287a91c2b2a134107-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"4336f584ee556b3287a91c2b2a134107\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:51.685234 kubelet[2291]: I1213 14:25:51.685077 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78de7dfda62b37103e2e7229f037b45e-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" (UID: \"78de7dfda62b37103e2e7229f037b45e\") " pod="kube-system/kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:52.224343 kubelet[2291]: I1213 14:25:52.224292 2291 apiserver.go:52] "Watching apiserver" Dec 13 14:25:52.253092 kubelet[2291]: I1213 14:25:52.253041 2291 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:25:52.440395 kubelet[2291]: W1213 14:25:52.440341 2291 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 14:25:52.440803 kubelet[2291]: E1213 14:25:52.440780 2291 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:25:52.543394 kubelet[2291]: I1213 14:25:52.543238 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" podStartSLOduration=1.543073149 podStartE2EDuration="1.543073149s" podCreationTimestamp="2024-12-13 14:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:52.542968626 +0000 UTC m=+1.462007902" watchObservedRunningTime="2024-12-13 14:25:52.543073149 +0000 UTC m=+1.462112429" Dec 13 14:25:52.543644 kubelet[2291]: I1213 14:25:52.543497 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" podStartSLOduration=3.543457838 podStartE2EDuration="3.543457838s" podCreationTimestamp="2024-12-13 14:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:52.513483968 +0000 UTC m=+1.432523250" watchObservedRunningTime="2024-12-13 14:25:52.543457838 +0000 UTC m=+1.462497119" Dec 13 14:25:52.594660 kubelet[2291]: I1213 14:25:52.594602 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" podStartSLOduration=1.594477519 podStartE2EDuration="1.594477519s" podCreationTimestamp="2024-12-13 14:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:52.566181589 +0000 UTC m=+1.485220873" watchObservedRunningTime="2024-12-13 14:25:52.594477519 +0000 UTC m=+1.513516806" Dec 13 14:25:54.740228 update_engine[1319]: I1213 14:25:54.739426 1319 update_attempter.cc:509] Updating boot flags... Dec 13 14:25:57.036382 sudo[1587]: pam_unix(sudo:session): session closed for user root Dec 13 14:25:57.035000 audit[1587]: USER_END pid=1587 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:57.041784 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:25:57.041929 kernel: audit: type=1106 audit(1734099957.035:219): pid=1587 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:57.040000 audit[1587]: CRED_DISP pid=1587 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:57.090193 kernel: audit: type=1104 audit(1734099957.040:220): pid=1587 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:25:57.111695 sshd[1583]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:57.112000 audit[1583]: USER_END pid=1583 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:57.117080 systemd[1]: sshd@6-10.128.0.74:22-139.178.68.195:45600.service: Deactivated successfully. Dec 13 14:25:57.118971 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:25:57.130536 systemd-logind[1316]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:25:57.132699 systemd-logind[1316]: Removed session 7. Dec 13 14:25:57.112000 audit[1583]: CRED_DISP pid=1583 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:57.171067 kernel: audit: type=1106 audit(1734099957.112:221): pid=1583 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:57.171279 kernel: audit: type=1104 audit(1734099957.112:222): pid=1583 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:25:57.171455 kernel: audit: type=1131 audit(1734099957.113:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.74:22-139.178.68.195:45600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:57.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.74:22-139.178.68.195:45600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:05.033032 kubelet[2291]: I1213 14:26:05.032987 2291 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:26:05.034776 env[1331]: time="2024-12-13T14:26:05.034671067Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:26:05.035708 kubelet[2291]: I1213 14:26:05.035657 2291 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:26:05.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.74:22-125.94.71.207:58512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:05.535333 systemd[1]: Started sshd@8-10.128.0.74:22-125.94.71.207:58512.service. 
Dec 13 14:26:05.566274 kernel: audit: type=1130 audit(1734099965.535:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.74:22-125.94.71.207:58512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:05.854475 kubelet[2291]: I1213 14:26:05.854415 2291 topology_manager.go:215] "Topology Admit Handler" podUID="66dc2bd9-4631-4348-b885-e5415296707f" podNamespace="kube-system" podName="kube-proxy-pdsbw" Dec 13 14:26:05.896977 kubelet[2291]: I1213 14:26:05.896937 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66dc2bd9-4631-4348-b885-e5415296707f-kube-proxy\") pod \"kube-proxy-pdsbw\" (UID: \"66dc2bd9-4631-4348-b885-e5415296707f\") " pod="kube-system/kube-proxy-pdsbw" Dec 13 14:26:05.897371 kubelet[2291]: I1213 14:26:05.897332 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66dc2bd9-4631-4348-b885-e5415296707f-lib-modules\") pod \"kube-proxy-pdsbw\" (UID: \"66dc2bd9-4631-4348-b885-e5415296707f\") " pod="kube-system/kube-proxy-pdsbw" Dec 13 14:26:05.897571 kubelet[2291]: I1213 14:26:05.897553 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66dc2bd9-4631-4348-b885-e5415296707f-xtables-lock\") pod \"kube-proxy-pdsbw\" (UID: \"66dc2bd9-4631-4348-b885-e5415296707f\") " pod="kube-system/kube-proxy-pdsbw" Dec 13 14:26:05.897719 kubelet[2291]: I1213 14:26:05.897701 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vgj\" (UniqueName: \"kubernetes.io/projected/66dc2bd9-4631-4348-b885-e5415296707f-kube-api-access-h8vgj\") pod \"kube-proxy-pdsbw\" (UID: \"66dc2bd9-4631-4348-b885-e5415296707f\") " pod="kube-system/kube-proxy-pdsbw" Dec 13 14:26:06.135598 kubelet[2291]: I1213 14:26:06.135451 2291 topology_manager.go:215] "Topology Admit Handler" podUID="9dbb5cf2-b28e-4e73-aff2-5b06e10c37bd" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-h4kf9" Dec 13 14:26:06.167745 env[1331]: time="2024-12-13T14:26:06.167127936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdsbw,Uid:66dc2bd9-4631-4348-b885-e5415296707f,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:06.199756 kubelet[2291]: I1213 14:26:06.199694 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9dbb5cf2-b28e-4e73-aff2-5b06e10c37bd-var-lib-calico\") pod \"tigera-operator-c7ccbd65-h4kf9\" (UID: \"9dbb5cf2-b28e-4e73-aff2-5b06e10c37bd\") " pod="tigera-operator/tigera-operator-c7ccbd65-h4kf9" Dec 13 14:26:06.199973 kubelet[2291]: I1213 14:26:06.199773 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqbs6\" (UniqueName: \"kubernetes.io/projected/9dbb5cf2-b28e-4e73-aff2-5b06e10c37bd-kube-api-access-rqbs6\") pod \"tigera-operator-c7ccbd65-h4kf9\" (UID: \"9dbb5cf2-b28e-4e73-aff2-5b06e10c37bd\") " pod="tigera-operator/tigera-operator-c7ccbd65-h4kf9" Dec 13 14:26:06.200261 env[1331]: time="2024-12-13T14:26:06.200151735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:06.200877 env[1331]: time="2024-12-13T14:26:06.200797969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:06.200877 env[1331]: time="2024-12-13T14:26:06.200832988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:06.201800 env[1331]: time="2024-12-13T14:26:06.201689249Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0be5ee8433c94681e38f6651c028bb01faca97d15d06ebb606028d2d0155edee pid=2393 runtime=io.containerd.runc.v2 Dec 13 14:26:06.285031 env[1331]: time="2024-12-13T14:26:06.283197915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdsbw,Uid:66dc2bd9-4631-4348-b885-e5415296707f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0be5ee8433c94681e38f6651c028bb01faca97d15d06ebb606028d2d0155edee\"" Dec 13 14:26:06.290290 env[1331]: time="2024-12-13T14:26:06.290238134Z" level=info msg="CreateContainer within sandbox \"0be5ee8433c94681e38f6651c028bb01faca97d15d06ebb606028d2d0155edee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:26:06.326030 env[1331]: time="2024-12-13T14:26:06.325979136Z" level=info msg="CreateContainer within sandbox \"0be5ee8433c94681e38f6651c028bb01faca97d15d06ebb606028d2d0155edee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c0b62312c3be106466afca747864f72ce4c8bb637f455d5e11a565448de11558\"" Dec 13 14:26:06.328790 env[1331]: time="2024-12-13T14:26:06.328724439Z" level=info msg="StartContainer for \"c0b62312c3be106466afca747864f72ce4c8bb637f455d5e11a565448de11558\"" Dec 13 14:26:06.404133 env[1331]: time="2024-12-13T14:26:06.403998577Z" level=info msg="StartContainer for \"c0b62312c3be106466afca747864f72ce4c8bb637f455d5e11a565448de11558\" returns successfully" Dec 13 14:26:06.442904 env[1331]: time="2024-12-13T14:26:06.442844756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-h4kf9,Uid:9dbb5cf2-b28e-4e73-aff2-5b06e10c37bd,Namespace:tigera-operator,Attempt:0,}" Dec 13 14:26:06.444219 kubelet[2291]: I1213 14:26:06.444168 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pdsbw" podStartSLOduration=1.444095335 podStartE2EDuration="1.444095335s" podCreationTimestamp="2024-12-13 14:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:26:06.443594981 +0000 UTC m=+15.362634265" watchObservedRunningTime="2024-12-13 14:26:06.444095335 +0000 UTC m=+15.363134635" Dec 13 14:26:06.469206 env[1331]: time="2024-12-13T14:26:06.468826395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:06.469206 env[1331]: time="2024-12-13T14:26:06.469153232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:06.469690 env[1331]: time="2024-12-13T14:26:06.469187676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:06.470097 env[1331]: time="2024-12-13T14:26:06.470032776Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a7fbccdb68610f67da4ec4bbd1c9bd77c48462e49c0ec944ef316d13eae9e6ec pid=2471 runtime=io.containerd.runc.v2 Dec 13 14:26:06.547000 audit[2522]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.564394 kernel: audit: type=1325 audit(1734099966.547:225): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.547000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe50174170 a2=0 a3=7ffe5017415c items=0 ppid=2447 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.602400 kernel: audit: type=1300 audit(1734099966.547:225): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe50174170 a2=0 a3=7ffe5017415c items=0 ppid=2447 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:26:06.625543 kernel: audit: type=1327 audit(1734099966.547:225): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:26:06.627384 env[1331]: time="2024-12-13T14:26:06.627120743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-h4kf9,Uid:9dbb5cf2-b28e-4e73-aff2-5b06e10c37bd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a7fbccdb68610f67da4ec4bbd1c9bd77c48462e49c0ec944ef316d13eae9e6ec\"" Dec 13 14:26:06.547000 audit[2523]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.635556 env[1331]: time="2024-12-13T14:26:06.630729799Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 14:26:06.547000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3cc590a0 a2=0 a3=7ffd3cc5908c items=0 ppid=2447 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.647062 sshd[2382]: Invalid user alexander from 125.94.71.207 port 58512 Dec 13 14:26:06.676858 kernel: audit: type=1325 audit(1734099966.547:226): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.677034 kernel: audit: type=1300 audit(1734099966.547:226): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3cc590a0 a2=0 a3=7ffd3cc5908c items=0 ppid=2447 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.547000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:26:06.694466 kernel: audit: type=1327 audit(1734099966.547:226): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:26:06.695886 sshd[2382]: Failed password for invalid user alexander from 125.94.71.207 port 58512 ssh2 Dec 13 14:26:06.553000 audit[2524]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.713484 kernel: audit: type=1325 audit(1734099966.553:227): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.553000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8e171850 a2=0 a3=7fff8e17183c items=0 ppid=2447 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:26:06.762653 kernel: audit: type=1300 audit(1734099966.553:227): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8e171850 a2=0 a3=7fff8e17183c items=0 ppid=2447 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.762742 kernel: audit: type=1327 audit(1734099966.553:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:26:06.558000 audit[2525]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.558000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd42f66bb0 a2=0 a3=7ffd42f66b9c items=0 ppid=2447 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:26:06.588000 audit[2526]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.588000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe721452a0 a2=0 a3=7ffe7214528c items=0 ppid=2447 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:26:06.600000 audit[2527]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.600000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd901557d0 a2=0 a3=7ffd901557bc items=0 ppid=2447 pid=2527 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:26:06.678000 audit[2536]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.678000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff5ea870b0 a2=0 a3=7fff5ea8709c items=0 ppid=2447 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:26:06.695000 audit[2382]: USER_AUTH pid=2382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="alexander" exe="/usr/sbin/sshd" hostname=125.94.71.207 addr=125.94.71.207 terminal=ssh res=failed' Dec 13 14:26:06.713000 audit[2538]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.713000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd7251aa80 a2=0 a3=7ffd7251aa6c items=0 ppid=2447 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 14:26:06.733000 audit[2541]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.733000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe28029820 a2=0 a3=7ffe2802980c items=0 ppid=2447 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 14:26:06.733000 audit[2542]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.733000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5a351f80 a2=0 a3=7ffd5a351f6c items=0 ppid=2447 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.733000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:26:06.738000 audit[2544]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.738000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd84a50860 a2=0 a3=7ffd84a5084c items=0 ppid=2447 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:26:06.743000 audit[2545]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.743000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7fa36670 a2=0 a3=7fff7fa3665c items=0 ppid=2447 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:26:06.746000 audit[2547]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.746000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4fd572c0 a2=0 a3=7ffd4fd572ac items=0 ppid=2447 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.746000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:26:06.752000 audit[2550]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.752000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8c086550 a2=0 a3=7ffc8c08653c items=0 ppid=2447 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 14:26:06.757000 audit[2551]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.757000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef5083060 a2=0 a3=7ffef508304c items=0 
ppid=2447 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:26:06.763000 audit[2553]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.763000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff86e08320 a2=0 a3=7fff86e0830c items=0 ppid=2447 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:26:06.768000 audit[2554]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.768000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee8c0da20 a2=0 a3=7ffee8c0da0c items=0 ppid=2447 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:26:06.774000 audit[2556]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.774000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd5e08490 a2=0 a3=7fffd5e0847c items=0 ppid=2447 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:26:06.780000 audit[2559]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.780000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd64180450 a2=0 a3=7ffd6418043c items=0 ppid=2447 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:26:06.786000 audit[2562]: NETFILTER_CFG table=filter:57 
family=2 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.786000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9a103f40 a2=0 a3=7fff9a103f2c items=0 ppid=2447 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:26:06.788000 audit[2563]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.788000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce2eff080 a2=0 a3=7ffce2eff06c items=0 ppid=2447 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.788000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:26:06.791000 audit[2565]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.791000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc6f6de2e0 a2=0 a3=7ffc6f6de2cc items=0 ppid=2447 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:26:06.797000 audit[2568]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.797000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb44e7940 a2=0 a3=7fffb44e792c items=0 ppid=2447 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.797000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:26:06.799000 audit[2569]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.799000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe34c59620 a2=0 a3=7ffe34c5960c items=0 ppid=2447 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.799000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:26:06.804000 audit[2571]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:26:06.804000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffce6c60580 a2=0 a3=7ffce6c6056c items=0 ppid=2447 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.804000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:26:06.836000 audit[2577]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:06.836000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd45998c90 a2=0 a3=7ffd45998c7c items=0 ppid=2447 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:06.847000 audit[2577]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:06.847000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd45998c90 a2=0 a3=7ffd45998c7c items=0 ppid=2447 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:06.849000 audit[2582]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.849000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffc37def60 a2=0 a3=7fffc37def4c items=0 ppid=2447 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:26:06.854000 audit[2584]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.854000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffe3c65b10 a2=0 a3=7fffe3c65afc items=0 ppid=2447 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.854000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 14:26:06.860000 audit[2587]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.860000 audit[2587]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff463d13c0 a2=0 a3=7fff463d13ac items=0 ppid=2447 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 14:26:06.861000 audit[2588]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.861000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff38e21660 a2=0 a3=7fff38e2164c items=0 ppid=2447 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:26:06.865000 audit[2590]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.865000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff29988ae0 a2=0 a3=7fff29988acc items=0 ppid=2447 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:26:06.867000 audit[2591]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.867000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf0fb2e90 a2=0 a3=7ffcf0fb2e7c items=0 ppid=2447 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:26:06.870000 audit[2593]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.870000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce706cd40 a2=0 
a3=7ffce706cd2c items=0 ppid=2447 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 14:26:06.876000 audit[2596]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.876000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd95e8a290 a2=0 a3=7ffd95e8a27c items=0 ppid=2447 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:26:06.877000 audit[2597]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.877000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbe6942c0 a2=0 a3=7fffbe6942ac items=0 ppid=2447 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:26:06.881000 audit[2599]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.881000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeb62da850 a2=0 a3=7ffeb62da83c items=0 ppid=2447 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:26:06.884000 audit[2600]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.884000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf77f1d80 a2=0 a3=7ffdf77f1d6c items=0 ppid=2447 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:26:06.888000 
audit[2602]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.888000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcd24b0ab0 a2=0 a3=7ffcd24b0a9c items=0 ppid=2447 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:26:06.894000 audit[2605]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.894000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff006211f0 a2=0 a3=7fff006211dc items=0 ppid=2447 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:26:06.899000 audit[2608]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.899000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc0331a90 a2=0 a3=7fffc0331a7c items=0 ppid=2447 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 14:26:06.901000 audit[2609]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.901000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb55e2de0 a2=0 a3=7ffeb55e2dcc items=0 ppid=2447 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.901000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:26:06.904000 audit[2611]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.904000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff788062a0 a2=0 a3=7fff7880628c items=0 ppid=2447 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:26:06.910000 audit[2614]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.910000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffcbce736c0 a2=0 a3=7ffcbce736ac items=0 ppid=2447 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.910000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:26:06.912000 audit[2615]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2615 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.912000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5e000cb0 a2=0 a3=7ffc5e000c9c items=0 ppid=2447 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:26:06.916000 audit[2617]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2617 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.916000 audit[2617]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff5ca23890 a2=0 a3=7fff5ca2387c items=0 ppid=2447 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:26:06.918465 sshd[2382]: Received disconnect from 125.94.71.207 port 58512:11: Bye Bye [preauth] Dec 13 14:26:06.918608 sshd[2382]: Disconnected from invalid user alexander 125.94.71.207 port 58512 [preauth] Dec 13 14:26:06.917000 audit[2618]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.917000 audit[2618]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff77141d60 a2=0 a3=7fff77141d4c items=0 ppid=2447 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:26:06.920945 
systemd[1]: sshd@8-10.128.0.74:22-125.94.71.207:58512.service: Deactivated successfully. Dec 13 14:26:06.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.74:22-125.94.71.207:58512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:06.925000 audit[2622]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2622 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.925000 audit[2622]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffe2ac7ea0 a2=0 a3=7fffe2ac7e8c items=0 ppid=2447 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:26:06.936000 audit[2625]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:26:06.936000 audit[2625]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffca0203ae0 a2=0 a3=7ffca0203acc items=0 ppid=2447 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:26:06.942000 audit[2627]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:26:06.942000 audit[2627]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffdcaa28ac0 a2=0 a3=7ffdcaa28aac items=0 ppid=2447 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.942000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:06.943000 audit[2627]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:26:06.943000 audit[2627]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdcaa28ac0 a2=0 a3=7ffdcaa28aac items=0 ppid=2447 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:06.943000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:07.023670 systemd[1]: run-containerd-runc-k8s.io-0be5ee8433c94681e38f6651c028bb01faca97d15d06ebb606028d2d0155edee-runc.TMOuEI.mount: Deactivated successfully. Dec 13 14:26:12.192572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663795349.mount: Deactivated successfully. 
Dec 13 14:26:13.117219 env[1331]: time="2024-12-13T14:26:13.117151718Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:13.120159 env[1331]: time="2024-12-13T14:26:13.120100918Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:13.122610 env[1331]: time="2024-12-13T14:26:13.122564475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:13.125474 env[1331]: time="2024-12-13T14:26:13.125429233Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:13.126281 env[1331]: time="2024-12-13T14:26:13.126225349Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 14:26:13.129535 env[1331]: time="2024-12-13T14:26:13.129478900Z" level=info msg="CreateContainer within sandbox \"a7fbccdb68610f67da4ec4bbd1c9bd77c48462e49c0ec944ef316d13eae9e6ec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 14:26:13.154545 env[1331]: time="2024-12-13T14:26:13.154486830Z" level=info msg="CreateContainer within sandbox \"a7fbccdb68610f67da4ec4bbd1c9bd77c48462e49c0ec944ef316d13eae9e6ec\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4ac71646b2ccb7bbbb15e0200b98a6d370f16d4c1dfd434d3142e37d349eabb0\"" Dec 13 14:26:13.155856 env[1331]: time="2024-12-13T14:26:13.155790075Z" level=info msg="StartContainer for \"4ac71646b2ccb7bbbb15e0200b98a6d370f16d4c1dfd434d3142e37d349eabb0\"" Dec 13 14:26:13.252508 env[1331]: time="2024-12-13T14:26:13.252415729Z" level=info msg="StartContainer for \"4ac71646b2ccb7bbbb15e0200b98a6d370f16d4c1dfd434d3142e37d349eabb0\" returns successfully" Dec 13 14:26:16.551000 audit[2669]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:16.558652 kernel: kauditd_printk_skb: 146 callbacks suppressed Dec 13 14:26:16.558808 kernel: audit: type=1325 audit(1734099976.551:278): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:16.551000 audit[2669]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff05b00d20 a2=0 a3=7fff05b00d0c items=0 ppid=2447 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:16.613399 kernel: audit: type=1300 audit(1734099976.551:278): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff05b00d20 a2=0 a3=7fff05b00d0c items=0 ppid=2447 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:16.551000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:16.644744 kernel: audit: type=1327 audit(1734099976.551:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:16.577000 audit[2669]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:16.661272 kubelet[2291]: I1213 14:26:16.657111 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-h4kf9" podStartSLOduration=4.15995054 podStartE2EDuration="10.657050845s" podCreationTimestamp="2024-12-13 14:26:06 +0000 UTC" firstStartedPulling="2024-12-13 14:26:06.629703448 +0000 UTC m=+15.548742721" lastFinishedPulling="2024-12-13 14:26:13.126803753 +0000 UTC m=+22.045843026" observedRunningTime="2024-12-13 14:26:13.467315854 +0000 UTC m=+22.386355151" watchObservedRunningTime="2024-12-13 14:26:16.657050845 +0000 UTC m=+25.576090151" Dec 13 14:26:16.661272 kubelet[2291]: I1213 14:26:16.657454 2291 topology_manager.go:215] "Topology Admit Handler" podUID="6a95c7c3-b5e4-452a-824c-96acee856022" podNamespace="calico-system" podName="calico-typha-bf6bdbb77-7g6xk" Dec 13 14:26:16.661942 kernel: audit: type=1325 audit(1734099976.577:279): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:16.577000 audit[2669]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff05b00d20 a2=0 a3=0 items=0 ppid=2447 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:16.703288 kubelet[2291]: I1213 14:26:16.683096 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9k86\" (UniqueName: \"kubernetes.io/projected/6a95c7c3-b5e4-452a-824c-96acee856022-kube-api-access-w9k86\") pod \"calico-typha-bf6bdbb77-7g6xk\" (UID: \"6a95c7c3-b5e4-452a-824c-96acee856022\") " pod="calico-system/calico-typha-bf6bdbb77-7g6xk" Dec 13 14:26:16.703288 kubelet[2291]: I1213 14:26:16.683150 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6a95c7c3-b5e4-452a-824c-96acee856022-typha-certs\") pod \"calico-typha-bf6bdbb77-7g6xk\" (UID: \"6a95c7c3-b5e4-452a-824c-96acee856022\") " pod="calico-system/calico-typha-bf6bdbb77-7g6xk" Dec 13 14:26:16.703288 kubelet[2291]: I1213 14:26:16.683200 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a95c7c3-b5e4-452a-824c-96acee856022-tigera-ca-bundle\") pod \"calico-typha-bf6bdbb77-7g6xk\" (UID: \"6a95c7c3-b5e4-452a-824c-96acee856022\") " pod="calico-system/calico-typha-bf6bdbb77-7g6xk" Dec 13 14:26:16.703528 kernel: audit: type=1300 audit(1734099976.577:279): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff05b00d20 a2=0 a3=0 items=0 ppid=2447 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:16.577000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:16.721802 kernel: audit: type=1327 audit(1734099976.577:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:16.739000 audit[2671]: NETFILTER_CFG table=filter:91 family=2 entries=17 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:16.757384 kernel: audit: type=1325 audit(1734099976.739:280): table=filter:91 family=2 entries=17 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:16.739000 audit[2671]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffd04d56070 a2=0 a3=7ffd04d5605c items=0 ppid=2447 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:16.791385 kernel: audit: type=1300 audit(1734099976.739:280): arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffd04d56070 a2=0 a3=7ffd04d5605c items=0 ppid=2447 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:16.739000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:16.824000 audit[2671]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:16.875523 kernel: audit: type=1327 audit(1734099976.739:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:16.875710 kernel: audit: type=1325 audit(1734099976.824:281): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:16.824000 audit[2671]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd04d56070 a2=0 a3=0 items=0 ppid=2447 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:16.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:16.886250 kubelet[2291]: I1213 14:26:16.886203 2291 topology_manager.go:215] "Topology Admit Handler" podUID="35aa5da0-40f1-40cf-be26-24157eb6c03e" podNamespace="calico-system" podName="calico-node-tdt22" Dec 13 14:26:16.964374 env[1331]: time="2024-12-13T14:26:16.964292043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf6bdbb77-7g6xk,Uid:6a95c7c3-b5e4-452a-824c-96acee856022,Namespace:calico-system,Attempt:0,}" Dec 13 14:26:16.989598 env[1331]: time="2024-12-13T14:26:16.989159140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:16.989598 env[1331]: time="2024-12-13T14:26:16.989269875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:16.989598 env[1331]: time="2024-12-13T14:26:16.989310327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:16.989598 env[1331]: time="2024-12-13T14:26:16.989543602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1047ba407cc8104e7af524ba7aa3d1223265b21f09f8a45215c5cb406dda5839 pid=2680 runtime=io.containerd.runc.v2 Dec 13 14:26:16.992981 kubelet[2291]: I1213 14:26:16.991549 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35aa5da0-40f1-40cf-be26-24157eb6c03e-xtables-lock\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.992981 kubelet[2291]: I1213 14:26:16.991614 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/35aa5da0-40f1-40cf-be26-24157eb6c03e-policysync\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.992981 kubelet[2291]: I1213 14:26:16.991654 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/35aa5da0-40f1-40cf-be26-24157eb6c03e-cni-log-dir\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.992981 kubelet[2291]: I1213 14:26:16.991697 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gnt\" (UniqueName: \"kubernetes.io/projected/35aa5da0-40f1-40cf-be26-24157eb6c03e-kube-api-access-69gnt\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.992981 kubelet[2291]: I1213 14:26:16.991735 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/35aa5da0-40f1-40cf-be26-24157eb6c03e-flexvol-driver-host\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.993328 kubelet[2291]: I1213 14:26:16.991769 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/35aa5da0-40f1-40cf-be26-24157eb6c03e-node-certs\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.993328 kubelet[2291]: I1213 14:26:16.991809 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35aa5da0-40f1-40cf-be26-24157eb6c03e-tigera-ca-bundle\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.993328 kubelet[2291]: I1213 14:26:16.991847 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/35aa5da0-40f1-40cf-be26-24157eb6c03e-cni-bin-dir\") pod \"calico-node-tdt22\" (UID: 
\"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.993328 kubelet[2291]: I1213 14:26:16.991882 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/35aa5da0-40f1-40cf-be26-24157eb6c03e-var-run-calico\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.993328 kubelet[2291]: I1213 14:26:16.991919 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35aa5da0-40f1-40cf-be26-24157eb6c03e-lib-modules\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.993588 kubelet[2291]: I1213 14:26:16.991954 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/35aa5da0-40f1-40cf-be26-24157eb6c03e-var-lib-calico\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:16.993588 kubelet[2291]: I1213 14:26:16.991993 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/35aa5da0-40f1-40cf-be26-24157eb6c03e-cni-net-dir\") pod \"calico-node-tdt22\" (UID: \"35aa5da0-40f1-40cf-be26-24157eb6c03e\") " pod="calico-system/calico-node-tdt22" Dec 13 14:26:17.087727 kubelet[2291]: I1213 14:26:17.087659 2291 topology_manager.go:215] "Topology Admit Handler" podUID="e8e7e321-b490-4f2e-961a-6ba46f4b801a" podNamespace="calico-system" podName="csi-node-driver-xl287" Dec 13 14:26:17.088120 kubelet[2291]: E1213 14:26:17.088079 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xl287" podUID="e8e7e321-b490-4f2e-961a-6ba46f4b801a" Dec 13 14:26:17.096813 kubelet[2291]: E1213 14:26:17.096712 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.096813 kubelet[2291]: W1213 14:26:17.096739 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.096813 kubelet[2291]: E1213 14:26:17.096768 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.097247 kubelet[2291]: E1213 14:26:17.097224 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.097247 kubelet[2291]: W1213 14:26:17.097246 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.097439 kubelet[2291]: E1213 14:26:17.097270 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.097604 kubelet[2291]: E1213 14:26:17.097578 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.097604 kubelet[2291]: W1213 14:26:17.097598 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.097754 kubelet[2291]: E1213 14:26:17.097617 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.099483 kubelet[2291]: E1213 14:26:17.097890 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.099483 kubelet[2291]: W1213 14:26:17.097905 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.099483 kubelet[2291]: E1213 14:26:17.097924 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.099483 kubelet[2291]: E1213 14:26:17.098213 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.099483 kubelet[2291]: W1213 14:26:17.098225 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.099483 kubelet[2291]: E1213 14:26:17.098245 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.100624 kubelet[2291]: E1213 14:26:17.100589 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.100624 kubelet[2291]: W1213 14:26:17.100611 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.100801 kubelet[2291]: E1213 14:26:17.100632 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.100926 kubelet[2291]: E1213 14:26:17.100907 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.100926 kubelet[2291]: W1213 14:26:17.100926 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.101069 kubelet[2291]: E1213 14:26:17.100945 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.101257 kubelet[2291]: E1213 14:26:17.101237 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.101257 kubelet[2291]: W1213 14:26:17.101257 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.101447 kubelet[2291]: E1213 14:26:17.101276 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.101633 kubelet[2291]: E1213 14:26:17.101607 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.101633 kubelet[2291]: W1213 14:26:17.101627 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.101768 kubelet[2291]: E1213 14:26:17.101647 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.110981 kubelet[2291]: E1213 14:26:17.104588 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.110981 kubelet[2291]: W1213 14:26:17.104606 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.110981 kubelet[2291]: E1213 14:26:17.104628 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.110981 kubelet[2291]: E1213 14:26:17.104900 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.110981 kubelet[2291]: W1213 14:26:17.104913 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.110981 kubelet[2291]: E1213 14:26:17.104932 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.110981 kubelet[2291]: E1213 14:26:17.105200 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.110981 kubelet[2291]: W1213 14:26:17.105212 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.110981 kubelet[2291]: E1213 14:26:17.105230 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.110981 kubelet[2291]: E1213 14:26:17.105678 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.111646 kubelet[2291]: W1213 14:26:17.105692 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.111646 kubelet[2291]: E1213 14:26:17.105712 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.111646 kubelet[2291]: E1213 14:26:17.109635 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.111646 kubelet[2291]: W1213 14:26:17.109652 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.111646 kubelet[2291]: E1213 14:26:17.109675 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.111646 kubelet[2291]: E1213 14:26:17.109968 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.111646 kubelet[2291]: W1213 14:26:17.109982 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.111646 kubelet[2291]: E1213 14:26:17.110002 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.111646 kubelet[2291]: E1213 14:26:17.110284 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.111646 kubelet[2291]: W1213 14:26:17.110298 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.112124 kubelet[2291]: E1213 14:26:17.110315 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.117106 kubelet[2291]: E1213 14:26:17.117038 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.117106 kubelet[2291]: W1213 14:26:17.117086 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.117106 kubelet[2291]: E1213 14:26:17.117113 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.126783 kubelet[2291]: E1213 14:26:17.126747 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.126783 kubelet[2291]: W1213 14:26:17.126778 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.127030 kubelet[2291]: E1213 14:26:17.126810 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.127998 kubelet[2291]: E1213 14:26:17.127958 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.127998 kubelet[2291]: W1213 14:26:17.127984 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.128210 kubelet[2291]: E1213 14:26:17.128120 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.128347 kubelet[2291]: E1213 14:26:17.128327 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.128347 kubelet[2291]: W1213 14:26:17.128347 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.128563 kubelet[2291]: E1213 14:26:17.128400 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.128727 kubelet[2291]: E1213 14:26:17.128704 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.128727 kubelet[2291]: W1213 14:26:17.128725 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.128869 kubelet[2291]: E1213 14:26:17.128747 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.129053 kubelet[2291]: E1213 14:26:17.129032 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.129053 kubelet[2291]: W1213 14:26:17.129053 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.129197 kubelet[2291]: E1213 14:26:17.129072 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.137892 kubelet[2291]: E1213 14:26:17.137869 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.138043 kubelet[2291]: W1213 14:26:17.138027 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.138123 kubelet[2291]: E1213 14:26:17.138112 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.194347 kubelet[2291]: E1213 14:26:17.194316 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.194619 kubelet[2291]: W1213 14:26:17.194594 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.194743 kubelet[2291]: E1213 14:26:17.194726 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.194875 kubelet[2291]: I1213 14:26:17.194861 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e8e7e321-b490-4f2e-961a-6ba46f4b801a-varrun\") pod \"csi-node-driver-xl287\" (UID: \"e8e7e321-b490-4f2e-961a-6ba46f4b801a\") " pod="calico-system/csi-node-driver-xl287" Dec 13 14:26:17.195386 kubelet[2291]: E1213 14:26:17.195368 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.195547 kubelet[2291]: W1213 14:26:17.195527 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.195661 kubelet[2291]: E1213 14:26:17.195646 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.196203 kubelet[2291]: E1213 14:26:17.196175 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.196203 kubelet[2291]: W1213 14:26:17.196199 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.196385 kubelet[2291]: E1213 14:26:17.196230 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.196385 kubelet[2291]: I1213 14:26:17.196268 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8e7e321-b490-4f2e-961a-6ba46f4b801a-socket-dir\") pod \"csi-node-driver-xl287\" (UID: \"e8e7e321-b490-4f2e-961a-6ba46f4b801a\") " pod="calico-system/csi-node-driver-xl287" Dec 13 14:26:17.199380 kubelet[2291]: E1213 14:26:17.196586 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.199380 kubelet[2291]: W1213 14:26:17.196604 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.199380 kubelet[2291]: E1213 14:26:17.196623 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.199380 kubelet[2291]: E1213 14:26:17.198244 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.199380 kubelet[2291]: W1213 14:26:17.198259 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.199380 kubelet[2291]: E1213 14:26:17.198285 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.199380 kubelet[2291]: I1213 14:26:17.198321 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsfmc\" (UniqueName: \"kubernetes.io/projected/e8e7e321-b490-4f2e-961a-6ba46f4b801a-kube-api-access-tsfmc\") pod \"csi-node-driver-xl287\" (UID: \"e8e7e321-b490-4f2e-961a-6ba46f4b801a\") " pod="calico-system/csi-node-driver-xl287" Dec 13 14:26:17.199380 kubelet[2291]: E1213 14:26:17.198861 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.199380 kubelet[2291]: W1213 14:26:17.198888 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.199933 env[1331]: time="2024-12-13T14:26:17.197546483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdt22,Uid:35aa5da0-40f1-40cf-be26-24157eb6c03e,Namespace:calico-system,Attempt:0,}" Dec 13 14:26:17.200009 kubelet[2291]: E1213 14:26:17.199036 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.200009 kubelet[2291]: I1213 14:26:17.199078 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8e7e321-b490-4f2e-961a-6ba46f4b801a-kubelet-dir\") pod \"csi-node-driver-xl287\" (UID: \"e8e7e321-b490-4f2e-961a-6ba46f4b801a\") " pod="calico-system/csi-node-driver-xl287" Dec 13 14:26:17.200009 kubelet[2291]: E1213 14:26:17.199293 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.200009 kubelet[2291]: W1213 14:26:17.199306 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.200306 kubelet[2291]: E1213 14:26:17.199424 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.203148 kubelet[2291]: E1213 14:26:17.200612 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.203148 kubelet[2291]: W1213 14:26:17.200628 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.203148 kubelet[2291]: E1213 14:26:17.200654 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.203148 kubelet[2291]: E1213 14:26:17.201056 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.203148 kubelet[2291]: W1213 14:26:17.201070 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.203148 kubelet[2291]: E1213 14:26:17.201094 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.203148 kubelet[2291]: I1213 14:26:17.201129 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8e7e321-b490-4f2e-961a-6ba46f4b801a-registration-dir\") pod \"csi-node-driver-xl287\" (UID: \"e8e7e321-b490-4f2e-961a-6ba46f4b801a\") " pod="calico-system/csi-node-driver-xl287" Dec 13 14:26:17.204247 kubelet[2291]: E1213 14:26:17.204219 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.204247 kubelet[2291]: W1213 14:26:17.204245 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.204462 kubelet[2291]: E1213 14:26:17.204274 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.204612 kubelet[2291]: E1213 14:26:17.204591 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.204612 kubelet[2291]: W1213 14:26:17.204612 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.204757 kubelet[2291]: E1213 14:26:17.204636 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.205200 kubelet[2291]: E1213 14:26:17.205161 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.205200 kubelet[2291]: W1213 14:26:17.205183 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.205383 kubelet[2291]: E1213 14:26:17.205208 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.205886 kubelet[2291]: E1213 14:26:17.205844 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.205886 kubelet[2291]: W1213 14:26:17.205868 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.206063 kubelet[2291]: E1213 14:26:17.205899 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.206543 kubelet[2291]: E1213 14:26:17.206504 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.206543 kubelet[2291]: W1213 14:26:17.206527 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.206543 kubelet[2291]: E1213 14:26:17.206547 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.209018 kubelet[2291]: E1213 14:26:17.206854 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.209018 kubelet[2291]: W1213 14:26:17.206867 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.209018 kubelet[2291]: E1213 14:26:17.206900 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.255262 env[1331]: time="2024-12-13T14:26:17.255177108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:17.255546 env[1331]: time="2024-12-13T14:26:17.255502653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:17.255721 env[1331]: time="2024-12-13T14:26:17.255684795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:17.258244 env[1331]: time="2024-12-13T14:26:17.258186811Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c02879e8af8946f46c183d1e6f5cf5df1609216fb9dadee49c15177e06b908e7 pid=2763 runtime=io.containerd.runc.v2 Dec 13 14:26:17.282275 env[1331]: time="2024-12-13T14:26:17.282221449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf6bdbb77-7g6xk,Uid:6a95c7c3-b5e4-452a-824c-96acee856022,Namespace:calico-system,Attempt:0,} returns sandbox id \"1047ba407cc8104e7af524ba7aa3d1223265b21f09f8a45215c5cb406dda5839\"" Dec 13 14:26:17.285171 env[1331]: time="2024-12-13T14:26:17.285126271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 14:26:17.308409 kubelet[2291]: E1213 14:26:17.308115 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.308409 kubelet[2291]: W1213 14:26:17.308140 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.308409 kubelet[2291]: E1213 14:26:17.308171 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.309729 kubelet[2291]: E1213 14:26:17.308935 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.309729 kubelet[2291]: W1213 14:26:17.308953 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.309729 kubelet[2291]: E1213 14:26:17.308980 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.309729 kubelet[2291]: E1213 14:26:17.309348 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.309729 kubelet[2291]: W1213 14:26:17.309389 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.309729 kubelet[2291]: E1213 14:26:17.309418 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.310533 kubelet[2291]: E1213 14:26:17.310248 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.310533 kubelet[2291]: W1213 14:26:17.310266 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.310533 kubelet[2291]: E1213 14:26:17.310293 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.310956 kubelet[2291]: E1213 14:26:17.310939 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.311099 kubelet[2291]: W1213 14:26:17.311080 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.311325 kubelet[2291]: E1213 14:26:17.311307 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.311801 kubelet[2291]: E1213 14:26:17.311783 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.311965 kubelet[2291]: W1213 14:26:17.311945 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.312235 kubelet[2291]: E1213 14:26:17.312217 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.314641 kubelet[2291]: E1213 14:26:17.314621 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.314803 kubelet[2291]: W1213 14:26:17.314784 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.315063 kubelet[2291]: E1213 14:26:17.315046 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.315423 kubelet[2291]: E1213 14:26:17.315404 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.315551 kubelet[2291]: W1213 14:26:17.315533 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.315880 kubelet[2291]: E1213 14:26:17.315844 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.316214 kubelet[2291]: E1213 14:26:17.316198 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.316340 kubelet[2291]: W1213 14:26:17.316320 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.316596 kubelet[2291]: E1213 14:26:17.316579 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.316937 kubelet[2291]: E1213 14:26:17.316921 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.317065 kubelet[2291]: W1213 14:26:17.317046 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.317300 kubelet[2291]: E1213 14:26:17.317284 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.317646 kubelet[2291]: E1213 14:26:17.317630 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.317797 kubelet[2291]: W1213 14:26:17.317780 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.318092 kubelet[2291]: E1213 14:26:17.318075 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.318436 kubelet[2291]: E1213 14:26:17.318420 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.318585 kubelet[2291]: W1213 14:26:17.318565 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.318818 kubelet[2291]: E1213 14:26:17.318803 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.319644 kubelet[2291]: E1213 14:26:17.319626 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.319772 kubelet[2291]: W1213 14:26:17.319753 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.320058 kubelet[2291]: E1213 14:26:17.320040 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.320461 kubelet[2291]: E1213 14:26:17.320443 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.320598 kubelet[2291]: W1213 14:26:17.320581 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.320803 kubelet[2291]: E1213 14:26:17.320789 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.321152 kubelet[2291]: E1213 14:26:17.321137 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.321305 kubelet[2291]: W1213 14:26:17.321288 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.322131 kubelet[2291]: E1213 14:26:17.322113 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.322546 kubelet[2291]: E1213 14:26:17.322529 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.322714 kubelet[2291]: W1213 14:26:17.322694 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.323439 kubelet[2291]: E1213 14:26:17.323419 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.325100 kubelet[2291]: E1213 14:26:17.325082 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.325286 kubelet[2291]: W1213 14:26:17.325266 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.325608 kubelet[2291]: E1213 14:26:17.325591 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.326742 kubelet[2291]: E1213 14:26:17.326723 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.326894 kubelet[2291]: W1213 14:26:17.326875 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.327180 kubelet[2291]: E1213 14:26:17.327162 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.327628 kubelet[2291]: E1213 14:26:17.327600 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.327768 kubelet[2291]: W1213 14:26:17.327749 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.328073 kubelet[2291]: E1213 14:26:17.328056 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.329661 kubelet[2291]: E1213 14:26:17.329643 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.329798 kubelet[2291]: W1213 14:26:17.329779 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.330195 kubelet[2291]: E1213 14:26:17.330174 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.330739 kubelet[2291]: E1213 14:26:17.330721 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.330873 kubelet[2291]: W1213 14:26:17.330856 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.331123 kubelet[2291]: E1213 14:26:17.331102 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.332218 kubelet[2291]: E1213 14:26:17.332199 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.332407 kubelet[2291]: W1213 14:26:17.332346 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.332663 kubelet[2291]: E1213 14:26:17.332647 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.333038 kubelet[2291]: E1213 14:26:17.333020 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.333164 kubelet[2291]: W1213 14:26:17.333146 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.333414 kubelet[2291]: E1213 14:26:17.333394 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:17.333821 kubelet[2291]: E1213 14:26:17.333804 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.334004 kubelet[2291]: W1213 14:26:17.333984 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.334304 kubelet[2291]: E1213 14:26:17.334286 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.334672 kubelet[2291]: E1213 14:26:17.334658 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.334794 kubelet[2291]: W1213 14:26:17.334778 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.334921 kubelet[2291]: E1213 14:26:17.334896 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.350737 kubelet[2291]: E1213 14:26:17.350614 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:17.350737 kubelet[2291]: W1213 14:26:17.350642 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:17.350737 kubelet[2291]: E1213 14:26:17.350676 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:17.407584 env[1331]: time="2024-12-13T14:26:17.407525543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdt22,Uid:35aa5da0-40f1-40cf-be26-24157eb6c03e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c02879e8af8946f46c183d1e6f5cf5df1609216fb9dadee49c15177e06b908e7\"" Dec 13 14:26:17.815863 systemd[1]: run-containerd-runc-k8s.io-1047ba407cc8104e7af524ba7aa3d1223265b21f09f8a45215c5cb406dda5839-runc.OqUvYm.mount: Deactivated successfully. 
Dec 13 14:26:17.848000 audit[2829]: NETFILTER_CFG table=filter:93 family=2 entries=18 op=nft_register_rule pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:17.848000 audit[2829]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffc2bbccb30 a2=0 a3=7ffc2bbccb1c items=0 ppid=2447 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:17.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:17.854000 audit[2829]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:17.854000 audit[2829]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc2bbccb30 a2=0 a3=0 items=0 ppid=2447 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:17.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:18.483209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888349835.mount: Deactivated successfully. Dec 13 14:26:19.270714 kubelet[2291]: E1213 14:26:19.270669 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xl287" podUID="e8e7e321-b490-4f2e-961a-6ba46f4b801a" Dec 13 14:26:19.432524 env[1331]: time="2024-12-13T14:26:19.432455717Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:19.435750 env[1331]: time="2024-12-13T14:26:19.435697443Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:19.438384 env[1331]: time="2024-12-13T14:26:19.438316647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:19.441273 env[1331]: time="2024-12-13T14:26:19.441206195Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:19.441673 env[1331]: time="2024-12-13T14:26:19.441630067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 14:26:19.443800 env[1331]: time="2024-12-13T14:26:19.443755981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 14:26:19.472873 env[1331]: time="2024-12-13T14:26:19.472817515Z" level=info msg="CreateContainer within sandbox \"1047ba407cc8104e7af524ba7aa3d1223265b21f09f8a45215c5cb406dda5839\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 14:26:19.492825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3905299910.mount: Deactivated successfully. Dec 13 14:26:19.500202 env[1331]: time="2024-12-13T14:26:19.500145223Z" level=info msg="CreateContainer within sandbox \"1047ba407cc8104e7af524ba7aa3d1223265b21f09f8a45215c5cb406dda5839\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a3b9258356c587222db5d9b8671093e776a1e9af6a6bc5c13382e201c7cbd70f\"" Dec 13 14:26:19.502874 env[1331]: time="2024-12-13T14:26:19.502621969Z" level=info msg="StartContainer for \"a3b9258356c587222db5d9b8671093e776a1e9af6a6bc5c13382e201c7cbd70f\"" Dec 13 14:26:19.611853 env[1331]: time="2024-12-13T14:26:19.611780142Z" level=info msg="StartContainer for \"a3b9258356c587222db5d9b8671093e776a1e9af6a6bc5c13382e201c7cbd70f\" returns successfully" Dec 13 14:26:20.570399 kubelet[2291]: E1213 14:26:20.570174 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:20.570399 kubelet[2291]: W1213 14:26:20.570203 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:20.570399 kubelet[2291]: E1213 14:26:20.570236 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:20.571259 kubelet[2291]: E1213 14:26:20.570600 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:20.571259 kubelet[2291]: W1213 14:26:20.570618 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:20.571259 kubelet[2291]: E1213 14:26:20.570650 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:20.571259 kubelet[2291]: E1213 14:26:20.570956 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:20.571259 kubelet[2291]: W1213 14:26:20.570972 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:20.571259 kubelet[2291]: E1213 14:26:20.570990 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:20.571695 kubelet[2291]: E1213 14:26:20.571276 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:20.571695 kubelet[2291]: W1213 14:26:20.571289 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:20.571695 kubelet[2291]: E1213 14:26:20.571310 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:20.571695 kubelet[2291]: E1213 14:26:20.571629 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:20.571695 kubelet[2291]: W1213 14:26:20.571643 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:20.571695 kubelet[2291]: E1213 14:26:20.571664 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:20.572028 kubelet[2291]: E1213 14:26:20.571927 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:20.572028 kubelet[2291]: W1213 14:26:20.571939 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:20.572028 kubelet[2291]: E1213 14:26:20.571958 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:20.572244 kubelet[2291]: E1213 14:26:20.572225 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:20.572244 kubelet[2291]: W1213 14:26:20.572241 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:20.572417 kubelet[2291]: E1213 14:26:20.572260 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:26:20.573102 kubelet[2291]: E1213 14:26:20.572570 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:20.573102 kubelet[2291]: W1213 14:26:20.572585 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:20.573102 kubelet[2291]: E1213 14:26:20.572605 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:20.573102 kubelet[2291]: E1213 14:26:20.572956 2291 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:26:20.573102 kubelet[2291]: W1213 14:26:20.572968 2291 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:26:20.573102 kubelet[2291]: E1213 14:26:20.573000 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same driver-call.go:262 / driver-call.go:149 / plugins.go:730 message sequence repeats for every FlexVolume probe attempt between 14:26:20.572 and 14:26:20.654]
Dec 13 14:26:20.654675 kubelet[2291]: E1213 14:26:20.654659 2291 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:26:20.871851 env[1331]: time="2024-12-13T14:26:20.871773861Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:20.876758 env[1331]: time="2024-12-13T14:26:20.875558280Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:20.879921 env[1331]: time="2024-12-13T14:26:20.878774962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:20.882616 env[1331]: time="2024-12-13T14:26:20.882571310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 14:26:20.882992 env[1331]: time="2024-12-13T14:26:20.881437057Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:20.887418 env[1331]: time="2024-12-13T14:26:20.887096237Z" level=info msg="CreateContainer within sandbox \"c02879e8af8946f46c183d1e6f5cf5df1609216fb9dadee49c15177e06b908e7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 14:26:20.914877 env[1331]: time="2024-12-13T14:26:20.914797936Z" level=info msg="CreateContainer within sandbox \"c02879e8af8946f46c183d1e6f5cf5df1609216fb9dadee49c15177e06b908e7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7ac50c84ca3c0a9287ba6b00ef7c70248f639bb098c825fe9a28ee21d0645293\"" Dec 13 14:26:20.916270 env[1331]: time="2024-12-13T14:26:20.916225146Z" level=info msg="StartContainer for \"7ac50c84ca3c0a9287ba6b00ef7c70248f639bb098c825fe9a28ee21d0645293\"" Dec 13 14:26:21.063534 env[1331]: time="2024-12-13T14:26:21.063477191Z" level=info msg="StartContainer for \"7ac50c84ca3c0a9287ba6b00ef7c70248f639bb098c825fe9a28ee21d0645293\" returns successfully" Dec 13 14:26:21.273056 kubelet[2291]: E1213 14:26:21.271015 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xl287" podUID="e8e7e321-b490-4f2e-961a-6ba46f4b801a" Dec 13 14:26:21.452589 systemd[1]: run-containerd-runc-k8s.io-7ac50c84ca3c0a9287ba6b00ef7c70248f639bb098c825fe9a28ee21d0645293-runc.jh0Bmd.mount: Deactivated successfully. Dec 13 14:26:21.452862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ac50c84ca3c0a9287ba6b00ef7c70248f639bb098c825fe9a28ee21d0645293-rootfs.mount: Deactivated successfully. 
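The FlexVolume noise above is the kubelet's plugin prober at work: for every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec it runs the driver binary with the single argument init and tries to unmarshal a JSON status object from its stdout. The nodeagent~uds/uds binary is not on the node yet (the flexvol-driver container that ships it, created from the pod2daemon-flexvol image, has only just started above), so the call yields no output and Go's encoding/json returns the "unexpected end of JSON input" that driver-call.go logs. A minimal illustrative Go sketch of that failure mode follows; the DriverStatus type is a simplified stand-in for the reply a FlexVolume driver is expected to print for init, not the kubelet's actual type.

// Illustrative sketch only (not kubelet source): shows why an empty driver
// response surfaces as "unexpected end of JSON input" in driver-call.go.
package main

import (
    "encoding/json"
    "fmt"
)

// DriverStatus is a simplified stand-in for the JSON a FlexVolume driver is
// expected to print for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
    Status       string          `json:"status"`
    Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
    var st DriverStatus

    // Case 1: the driver binary is missing, so the call produces no output.
    // Unmarshalling the empty output yields the exact error seen in the log.
    err := json.Unmarshal([]byte(""), &st)
    fmt.Println("empty output:", err) // unexpected end of JSON input

    // Case 2: an installed driver answers "init" with a status object.
    ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
    if err := json.Unmarshal(ok, &st); err == nil {
        fmt.Println("valid output:", st.Status, st.Capabilities)
    }
}

Consistent with this, the probe errors should stop once the driver binary is in place, which lines up with these messages not recurring later in the log.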
Dec 13 14:26:21.499703 kubelet[2291]: I1213 14:26:21.499649 2291 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:26:21.522637 kubelet[2291]: I1213 14:26:21.522591 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-bf6bdbb77-7g6xk" podStartSLOduration=3.363978296 podStartE2EDuration="5.522520271s" podCreationTimestamp="2024-12-13 14:26:16 +0000 UTC" firstStartedPulling="2024-12-13 14:26:17.284156246 +0000 UTC m=+26.203195520" lastFinishedPulling="2024-12-13 14:26:19.442698234 +0000 UTC m=+28.361737495" observedRunningTime="2024-12-13 14:26:20.509626236 +0000 UTC m=+29.428665532" watchObservedRunningTime="2024-12-13 14:26:21.522520271 +0000 UTC m=+30.441559562" Dec 13 14:26:21.564788 env[1331]: time="2024-12-13T14:26:21.563939830Z" level=error msg="collecting metrics for 7ac50c84ca3c0a9287ba6b00ef7c70248f639bb098c825fe9a28ee21d0645293" error="cgroups: cgroup deleted: unknown" Dec 13 14:26:21.741772 env[1331]: time="2024-12-13T14:26:21.741441848Z" level=info msg="shim disconnected" id=7ac50c84ca3c0a9287ba6b00ef7c70248f639bb098c825fe9a28ee21d0645293 Dec 13 14:26:21.741772 env[1331]: time="2024-12-13T14:26:21.741552153Z" level=warning msg="cleaning up after shim disconnected" id=7ac50c84ca3c0a9287ba6b00ef7c70248f639bb098c825fe9a28ee21d0645293 namespace=k8s.io Dec 13 14:26:21.741772 env[1331]: time="2024-12-13T14:26:21.741588156Z" level=info msg="cleaning up dead shim" Dec 13 14:26:21.753840 env[1331]: time="2024-12-13T14:26:21.753767507Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2958 runtime=io.containerd.runc.v2\n" Dec 13 14:26:22.504567 env[1331]: time="2024-12-13T14:26:22.504509493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 14:26:23.270648 kubelet[2291]: E1213 14:26:23.270575 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xl287" podUID="e8e7e321-b490-4f2e-961a-6ba46f4b801a" Dec 13 14:26:25.271869 kubelet[2291]: E1213 14:26:25.271824 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xl287" podUID="e8e7e321-b490-4f2e-961a-6ba46f4b801a" Dec 13 14:26:27.228504 env[1331]: time="2024-12-13T14:26:27.228437559Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:27.231568 env[1331]: time="2024-12-13T14:26:27.231497210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:27.234281 env[1331]: time="2024-12-13T14:26:27.234216800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:27.236784 env[1331]: time="2024-12-13T14:26:27.236720832Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:27.237685 env[1331]: time="2024-12-13T14:26:27.237621830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 14:26:27.241015 env[1331]: time="2024-12-13T14:26:27.240934547Z" level=info msg="CreateContainer within sandbox \"c02879e8af8946f46c183d1e6f5cf5df1609216fb9dadee49c15177e06b908e7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:26:27.267856 env[1331]: time="2024-12-13T14:26:27.267781870Z" level=info msg="CreateContainer within sandbox \"c02879e8af8946f46c183d1e6f5cf5df1609216fb9dadee49c15177e06b908e7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3437c11beb1eca79e234c9012416c6231f2edca07f1d2e39b829516d2491d617\"" Dec 13 14:26:27.269850 env[1331]: time="2024-12-13T14:26:27.268484516Z" level=info msg="StartContainer for \"3437c11beb1eca79e234c9012416c6231f2edca07f1d2e39b829516d2491d617\"" Dec 13 14:26:27.270837 kubelet[2291]: E1213 14:26:27.270785 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xl287" podUID="e8e7e321-b490-4f2e-961a-6ba46f4b801a" Dec 13 14:26:27.364831 env[1331]: time="2024-12-13T14:26:27.364728126Z" level=info msg="StartContainer for \"3437c11beb1eca79e234c9012416c6231f2edca07f1d2e39b829516d2491d617\" returns successfully" Dec 13 14:26:28.265098 env[1331]: time="2024-12-13T14:26:28.265005862Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:26:28.305861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3437c11beb1eca79e234c9012416c6231f2edca07f1d2e39b829516d2491d617-rootfs.mount: Deactivated successfully. 
Dec 13 14:26:28.350280 kubelet[2291]: I1213 14:26:28.346656 2291 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:26:28.378659 kubelet[2291]: I1213 14:26:28.378615 2291 topology_manager.go:215] "Topology Admit Handler" podUID="e8fdff54-c28c-49b3-874a-502acca12325" podNamespace="kube-system" podName="coredns-76f75df574-7f6gg" Dec 13 14:26:28.394397 kubelet[2291]: I1213 14:26:28.394111 2291 topology_manager.go:215] "Topology Admit Handler" podUID="35d24807-89c8-42de-a1a0-7e24511228d9" podNamespace="calico-apiserver" podName="calico-apiserver-67448fdc7d-2zp8b" Dec 13 14:26:28.406274 kubelet[2291]: W1213 14:26:28.406234 2291 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' and this object Dec 13 14:26:28.406274 kubelet[2291]: E1213 14:26:28.406288 2291 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' and this object Dec 13 14:26:28.408124 kubelet[2291]: W1213 14:26:28.408082 2291 reflector.go:539] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' and this object Dec 13 14:26:28.408309 kubelet[2291]: E1213 14:26:28.408133 2291 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' and this object Dec 13 14:26:28.408309 kubelet[2291]: I1213 14:26:28.408194 2291 topology_manager.go:215] "Topology Admit Handler" podUID="c2a62712-176a-4d0f-9d03-df8cafed69c7" podNamespace="calico-system" podName="calico-kube-controllers-8c579667c-4vgnc" Dec 13 14:26:28.408510 kubelet[2291]: I1213 14:26:28.408441 2291 topology_manager.go:215] "Topology Admit Handler" podUID="344bd3be-fc59-4092-b9f4-59fe040c7639" podNamespace="calico-apiserver" podName="calico-apiserver-67448fdc7d-mzvdz" Dec 13 14:26:28.413643 kubelet[2291]: I1213 14:26:28.413608 2291 topology_manager.go:215] "Topology Admit Handler" podUID="d7e4fbf8-2010-40d9-9761-00fa99980147" podNamespace="kube-system" podName="coredns-76f75df574-fcccw" Dec 13 14:26:28.508489 kubelet[2291]: I1213 14:26:28.508441 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwkxc\" (UniqueName: 
\"kubernetes.io/projected/d7e4fbf8-2010-40d9-9761-00fa99980147-kube-api-access-fwkxc\") pod \"coredns-76f75df574-fcccw\" (UID: \"d7e4fbf8-2010-40d9-9761-00fa99980147\") " pod="kube-system/coredns-76f75df574-fcccw" Dec 13 14:26:28.508753 kubelet[2291]: I1213 14:26:28.508654 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35d24807-89c8-42de-a1a0-7e24511228d9-calico-apiserver-certs\") pod \"calico-apiserver-67448fdc7d-2zp8b\" (UID: \"35d24807-89c8-42de-a1a0-7e24511228d9\") " pod="calico-apiserver/calico-apiserver-67448fdc7d-2zp8b" Dec 13 14:26:28.508753 kubelet[2291]: I1213 14:26:28.508730 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8fdff54-c28c-49b3-874a-502acca12325-config-volume\") pod \"coredns-76f75df574-7f6gg\" (UID: \"e8fdff54-c28c-49b3-874a-502acca12325\") " pod="kube-system/coredns-76f75df574-7f6gg" Dec 13 14:26:28.508897 kubelet[2291]: I1213 14:26:28.508771 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tmmt\" (UniqueName: \"kubernetes.io/projected/e8fdff54-c28c-49b3-874a-502acca12325-kube-api-access-7tmmt\") pod \"coredns-76f75df574-7f6gg\" (UID: \"e8fdff54-c28c-49b3-874a-502acca12325\") " pod="kube-system/coredns-76f75df574-7f6gg" Dec 13 14:26:28.508897 kubelet[2291]: I1213 14:26:28.508838 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/344bd3be-fc59-4092-b9f4-59fe040c7639-calico-apiserver-certs\") pod \"calico-apiserver-67448fdc7d-mzvdz\" (UID: \"344bd3be-fc59-4092-b9f4-59fe040c7639\") " pod="calico-apiserver/calico-apiserver-67448fdc7d-mzvdz" Dec 13 14:26:28.508897 kubelet[2291]: I1213 14:26:28.508894 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zccf\" (UniqueName: \"kubernetes.io/projected/344bd3be-fc59-4092-b9f4-59fe040c7639-kube-api-access-2zccf\") pod \"calico-apiserver-67448fdc7d-mzvdz\" (UID: \"344bd3be-fc59-4092-b9f4-59fe040c7639\") " pod="calico-apiserver/calico-apiserver-67448fdc7d-mzvdz" Dec 13 14:26:28.509097 kubelet[2291]: I1213 14:26:28.508935 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7e4fbf8-2010-40d9-9761-00fa99980147-config-volume\") pod \"coredns-76f75df574-fcccw\" (UID: \"d7e4fbf8-2010-40d9-9761-00fa99980147\") " pod="kube-system/coredns-76f75df574-fcccw" Dec 13 14:26:28.509097 kubelet[2291]: I1213 14:26:28.509022 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82pbf\" (UniqueName: \"kubernetes.io/projected/35d24807-89c8-42de-a1a0-7e24511228d9-kube-api-access-82pbf\") pod \"calico-apiserver-67448fdc7d-2zp8b\" (UID: \"35d24807-89c8-42de-a1a0-7e24511228d9\") " pod="calico-apiserver/calico-apiserver-67448fdc7d-2zp8b" Dec 13 14:26:28.509215 kubelet[2291]: I1213 14:26:28.509100 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2a62712-176a-4d0f-9d03-df8cafed69c7-tigera-ca-bundle\") pod \"calico-kube-controllers-8c579667c-4vgnc\" (UID: \"c2a62712-176a-4d0f-9d03-df8cafed69c7\") " 
pod="calico-system/calico-kube-controllers-8c579667c-4vgnc" Dec 13 14:26:28.509215 kubelet[2291]: I1213 14:26:28.509171 2291 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrqbm\" (UniqueName: \"kubernetes.io/projected/c2a62712-176a-4d0f-9d03-df8cafed69c7-kube-api-access-xrqbm\") pod \"calico-kube-controllers-8c579667c-4vgnc\" (UID: \"c2a62712-176a-4d0f-9d03-df8cafed69c7\") " pod="calico-system/calico-kube-controllers-8c579667c-4vgnc" Dec 13 14:26:28.693605 env[1331]: time="2024-12-13T14:26:28.693527758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7f6gg,Uid:e8fdff54-c28c-49b3-874a-502acca12325,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:28.723809 env[1331]: time="2024-12-13T14:26:28.723738810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c579667c-4vgnc,Uid:c2a62712-176a-4d0f-9d03-df8cafed69c7,Namespace:calico-system,Attempt:0,}" Dec 13 14:26:28.735342 env[1331]: time="2024-12-13T14:26:28.735286407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fcccw,Uid:d7e4fbf8-2010-40d9-9761-00fa99980147,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:29.069312 env[1331]: time="2024-12-13T14:26:29.069127377Z" level=info msg="shim disconnected" id=3437c11beb1eca79e234c9012416c6231f2edca07f1d2e39b829516d2491d617 Dec 13 14:26:29.069312 env[1331]: time="2024-12-13T14:26:29.069194369Z" level=warning msg="cleaning up after shim disconnected" id=3437c11beb1eca79e234c9012416c6231f2edca07f1d2e39b829516d2491d617 namespace=k8s.io Dec 13 14:26:29.069312 env[1331]: time="2024-12-13T14:26:29.069210682Z" level=info msg="cleaning up dead shim" Dec 13 14:26:29.101751 env[1331]: time="2024-12-13T14:26:29.101701303Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3026 runtime=io.containerd.runc.v2\n" Dec 13 14:26:29.255698 env[1331]: time="2024-12-13T14:26:29.255607501Z" level=error msg="Failed to destroy network for sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.256200 env[1331]: time="2024-12-13T14:26:29.256137177Z" level=error msg="encountered an error cleaning up failed sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.256341 env[1331]: time="2024-12-13T14:26:29.256231084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7f6gg,Uid:e8fdff54-c28c-49b3-874a-502acca12325,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.256610 kubelet[2291]: E1213 14:26:29.256579 2291 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.256767 kubelet[2291]: E1213 14:26:29.256663 2291 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7f6gg" Dec 13 14:26:29.256767 kubelet[2291]: E1213 14:26:29.256699 2291 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7f6gg" Dec 13 14:26:29.256885 kubelet[2291]: E1213 14:26:29.256774 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7f6gg_kube-system(e8fdff54-c28c-49b3-874a-502acca12325)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7f6gg_kube-system(e8fdff54-c28c-49b3-874a-502acca12325)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7f6gg" podUID="e8fdff54-c28c-49b3-874a-502acca12325" Dec 13 14:26:29.281140 env[1331]: time="2024-12-13T14:26:29.281073660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xl287,Uid:e8e7e321-b490-4f2e-961a-6ba46f4b801a,Namespace:calico-system,Attempt:0,}" Dec 13 14:26:29.334569 env[1331]: time="2024-12-13T14:26:29.333606590Z" level=error msg="Failed to destroy network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.339488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b-shm.mount: Deactivated successfully. 
Dec 13 14:26:29.343542 env[1331]: time="2024-12-13T14:26:29.343461180Z" level=error msg="encountered an error cleaning up failed sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.343855 env[1331]: time="2024-12-13T14:26:29.343760800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c579667c-4vgnc,Uid:c2a62712-176a-4d0f-9d03-df8cafed69c7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.344466 kubelet[2291]: E1213 14:26:29.344348 2291 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.344466 kubelet[2291]: E1213 14:26:29.344437 2291 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c579667c-4vgnc" Dec 13 14:26:29.344466 kubelet[2291]: E1213 14:26:29.344470 2291 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c579667c-4vgnc" Dec 13 14:26:29.344723 kubelet[2291]: E1213 14:26:29.344560 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8c579667c-4vgnc_calico-system(c2a62712-176a-4d0f-9d03-df8cafed69c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8c579667c-4vgnc_calico-system(c2a62712-176a-4d0f-9d03-df8cafed69c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8c579667c-4vgnc" podUID="c2a62712-176a-4d0f-9d03-df8cafed69c7" Dec 13 14:26:29.354409 env[1331]: time="2024-12-13T14:26:29.348672861Z" level=error msg="Failed to destroy network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.354409 env[1331]: time="2024-12-13T14:26:29.353585102Z" level=error msg="encountered an error cleaning up failed sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.354409 env[1331]: time="2024-12-13T14:26:29.353673278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fcccw,Uid:d7e4fbf8-2010-40d9-9761-00fa99980147,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.353295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2-shm.mount: Deactivated successfully. Dec 13 14:26:29.356167 kubelet[2291]: E1213 14:26:29.356130 2291 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.358367 kubelet[2291]: E1213 14:26:29.356202 2291 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fcccw" Dec 13 14:26:29.358367 kubelet[2291]: E1213 14:26:29.356243 2291 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fcccw" Dec 13 14:26:29.358367 kubelet[2291]: E1213 14:26:29.356417 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fcccw_kube-system(d7e4fbf8-2010-40d9-9761-00fa99980147)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fcccw_kube-system(d7e4fbf8-2010-40d9-9761-00fa99980147)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fcccw" podUID="d7e4fbf8-2010-40d9-9761-00fa99980147" Dec 13 
14:26:29.415044 env[1331]: time="2024-12-13T14:26:29.414966566Z" level=error msg="Failed to destroy network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.419616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f-shm.mount: Deactivated successfully. Dec 13 14:26:29.422528 env[1331]: time="2024-12-13T14:26:29.422431616Z" level=error msg="encountered an error cleaning up failed sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.422738 env[1331]: time="2024-12-13T14:26:29.422545993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xl287,Uid:e8e7e321-b490-4f2e-961a-6ba46f4b801a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.422901 kubelet[2291]: E1213 14:26:29.422831 2291 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.423023 kubelet[2291]: E1213 14:26:29.422946 2291 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xl287" Dec 13 14:26:29.423023 kubelet[2291]: E1213 14:26:29.422980 2291 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xl287" Dec 13 14:26:29.423139 kubelet[2291]: E1213 14:26:29.423054 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xl287_calico-system(e8e7e321-b490-4f2e-961a-6ba46f4b801a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xl287_calico-system(e8e7e321-b490-4f2e-961a-6ba46f4b801a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xl287" podUID="e8e7e321-b490-4f2e-961a-6ba46f4b801a" Dec 13 14:26:29.526924 kubelet[2291]: I1213 14:26:29.526887 2291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:29.527971 env[1331]: time="2024-12-13T14:26:29.527921375Z" level=info msg="StopPodSandbox for \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\"" Dec 13 14:26:29.530270 kubelet[2291]: I1213 14:26:29.530232 2291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:29.533985 env[1331]: time="2024-12-13T14:26:29.532308899Z" level=info msg="StopPodSandbox for \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\"" Dec 13 14:26:29.535030 kubelet[2291]: I1213 14:26:29.534992 2291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:29.538034 env[1331]: time="2024-12-13T14:26:29.537964302Z" level=info msg="StopPodSandbox for \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\"" Dec 13 14:26:29.556904 env[1331]: time="2024-12-13T14:26:29.556858238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 14:26:29.574160 kubelet[2291]: I1213 14:26:29.565128 2291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:29.574430 env[1331]: time="2024-12-13T14:26:29.566328789Z" level=info msg="StopPodSandbox for \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\"" Dec 13 14:26:29.608472 env[1331]: time="2024-12-13T14:26:29.608407160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67448fdc7d-2zp8b,Uid:35d24807-89c8-42de-a1a0-7e24511228d9,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:26:29.651464 env[1331]: time="2024-12-13T14:26:29.649869473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67448fdc7d-mzvdz,Uid:344bd3be-fc59-4092-b9f4-59fe040c7639,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:26:29.694893 env[1331]: time="2024-12-13T14:26:29.694823082Z" level=error msg="StopPodSandbox for \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\" failed" error="failed to destroy network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.695230 env[1331]: time="2024-12-13T14:26:29.694823085Z" level=error msg="StopPodSandbox for \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\" failed" error="failed to destroy network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.695822 kubelet[2291]: E1213 14:26:29.695487 2291 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:29.695822 kubelet[2291]: E1213 14:26:29.695631 2291 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f"} Dec 13 14:26:29.695822 kubelet[2291]: E1213 14:26:29.695718 2291 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8e7e321-b490-4f2e-961a-6ba46f4b801a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:26:29.695822 kubelet[2291]: E1213 14:26:29.695792 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8e7e321-b490-4f2e-961a-6ba46f4b801a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xl287" podUID="e8e7e321-b490-4f2e-961a-6ba46f4b801a" Dec 13 14:26:29.696634 kubelet[2291]: E1213 14:26:29.696606 2291 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:29.696784 kubelet[2291]: E1213 14:26:29.696655 2291 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2"} Dec 13 14:26:29.696784 kubelet[2291]: E1213 14:26:29.696705 2291 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e4fbf8-2010-40d9-9761-00fa99980147\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:26:29.696784 kubelet[2291]: E1213 14:26:29.696747 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e4fbf8-2010-40d9-9761-00fa99980147\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fcccw" podUID="d7e4fbf8-2010-40d9-9761-00fa99980147" Dec 13 14:26:29.714411 env[1331]: time="2024-12-13T14:26:29.714308358Z" level=error msg="StopPodSandbox for \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\" failed" error="failed to destroy network for sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.715048 kubelet[2291]: E1213 14:26:29.715013 2291 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:29.715190 kubelet[2291]: E1213 14:26:29.715074 2291 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e"} Dec 13 14:26:29.715190 kubelet[2291]: E1213 14:26:29.715133 2291 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8fdff54-c28c-49b3-874a-502acca12325\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:26:29.715399 kubelet[2291]: E1213 14:26:29.715192 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8fdff54-c28c-49b3-874a-502acca12325\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7f6gg" podUID="e8fdff54-c28c-49b3-874a-502acca12325" Dec 13 14:26:29.734476 env[1331]: time="2024-12-13T14:26:29.734346170Z" level=error msg="StopPodSandbox for \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\" failed" error="failed to destroy network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.734757 kubelet[2291]: E1213 14:26:29.734726 2291 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:29.734872 kubelet[2291]: E1213 14:26:29.734790 2291 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b"} Dec 13 14:26:29.734872 kubelet[2291]: E1213 14:26:29.734848 2291 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c2a62712-176a-4d0f-9d03-df8cafed69c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:26:29.735045 kubelet[2291]: E1213 14:26:29.734897 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2a62712-176a-4d0f-9d03-df8cafed69c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8c579667c-4vgnc" podUID="c2a62712-176a-4d0f-9d03-df8cafed69c7" Dec 13 14:26:29.804532 env[1331]: time="2024-12-13T14:26:29.804452854Z" level=error msg="Failed to destroy network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.805332 env[1331]: time="2024-12-13T14:26:29.805272589Z" level=error msg="encountered an error cleaning up failed sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.805644 env[1331]: time="2024-12-13T14:26:29.805571384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67448fdc7d-2zp8b,Uid:35d24807-89c8-42de-a1a0-7e24511228d9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.806214 kubelet[2291]: E1213 14:26:29.806172 2291 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.806346 kubelet[2291]: E1213 14:26:29.806294 2291 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67448fdc7d-2zp8b" Dec 13 14:26:29.806346 kubelet[2291]: E1213 14:26:29.806330 2291 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67448fdc7d-2zp8b" Dec 13 14:26:29.806521 kubelet[2291]: E1213 14:26:29.806447 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67448fdc7d-2zp8b_calico-apiserver(35d24807-89c8-42de-a1a0-7e24511228d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67448fdc7d-2zp8b_calico-apiserver(35d24807-89c8-42de-a1a0-7e24511228d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67448fdc7d-2zp8b" podUID="35d24807-89c8-42de-a1a0-7e24511228d9" Dec 13 14:26:29.823743 env[1331]: time="2024-12-13T14:26:29.823659534Z" level=error msg="Failed to destroy network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.824188 env[1331]: time="2024-12-13T14:26:29.824116788Z" level=error msg="encountered an error cleaning up failed sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.824321 env[1331]: time="2024-12-13T14:26:29.824191741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67448fdc7d-mzvdz,Uid:344bd3be-fc59-4092-b9f4-59fe040c7639,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:29.824651 kubelet[2291]: E1213 14:26:29.824596 2291 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
14:26:29.824773 kubelet[2291]: E1213 14:26:29.824670 2291 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67448fdc7d-mzvdz" Dec 13 14:26:29.824773 kubelet[2291]: E1213 14:26:29.824710 2291 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67448fdc7d-mzvdz" Dec 13 14:26:29.824889 kubelet[2291]: E1213 14:26:29.824790 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67448fdc7d-mzvdz_calico-apiserver(344bd3be-fc59-4092-b9f4-59fe040c7639)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67448fdc7d-mzvdz_calico-apiserver(344bd3be-fc59-4092-b9f4-59fe040c7639)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67448fdc7d-mzvdz" podUID="344bd3be-fc59-4092-b9f4-59fe040c7639" Dec 13 14:26:30.568897 kubelet[2291]: I1213 14:26:30.568850 2291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:30.569851 env[1331]: time="2024-12-13T14:26:30.569806951Z" level=info msg="StopPodSandbox for \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\"" Dec 13 14:26:30.572459 kubelet[2291]: I1213 14:26:30.572414 2291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:30.576402 env[1331]: time="2024-12-13T14:26:30.574311778Z" level=info msg="StopPodSandbox for \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\"" Dec 13 14:26:30.629081 env[1331]: time="2024-12-13T14:26:30.629007303Z" level=error msg="StopPodSandbox for \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\" failed" error="failed to destroy network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:30.629929 kubelet[2291]: E1213 14:26:30.629521 2291 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:30.629929 kubelet[2291]: E1213 14:26:30.629597 2291 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891"} Dec 13 14:26:30.629929 kubelet[2291]: E1213 14:26:30.629654 2291 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35d24807-89c8-42de-a1a0-7e24511228d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:26:30.629929 kubelet[2291]: E1213 14:26:30.629706 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35d24807-89c8-42de-a1a0-7e24511228d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67448fdc7d-2zp8b" podUID="35d24807-89c8-42de-a1a0-7e24511228d9" Dec 13 14:26:30.635615 env[1331]: time="2024-12-13T14:26:30.635548564Z" level=error msg="StopPodSandbox for \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\" failed" error="failed to destroy network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:26:30.635836 kubelet[2291]: E1213 14:26:30.635807 2291 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:30.635955 kubelet[2291]: E1213 14:26:30.635859 2291 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36"} Dec 13 14:26:30.635955 kubelet[2291]: E1213 14:26:30.635912 2291 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"344bd3be-fc59-4092-b9f4-59fe040c7639\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:26:30.635955 kubelet[2291]: E1213 14:26:30.635954 2291 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"344bd3be-fc59-4092-b9f4-59fe040c7639\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67448fdc7d-mzvdz" podUID="344bd3be-fc59-4092-b9f4-59fe040c7639" Dec 13 14:26:37.237818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2118144726.mount: Deactivated successfully. Dec 13 14:26:37.249002 kubelet[2291]: I1213 14:26:37.248855 2291 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:26:37.288547 env[1331]: time="2024-12-13T14:26:37.288485806Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:37.293271 env[1331]: time="2024-12-13T14:26:37.293224115Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:37.299384 env[1331]: time="2024-12-13T14:26:37.298982732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:37.323441 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 14:26:37.323632 kernel: audit: type=1325 audit(1734099997.302:284): table=filter:95 family=2 entries=17 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:37.302000 audit[3357]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:37.302000 audit[3357]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff8e124610 a2=0 a3=7fff8e1245fc items=0 ppid=2447 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:37.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:37.373794 kernel: audit: type=1300 audit(1734099997.302:284): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff8e124610 a2=0 a3=7fff8e1245fc items=0 ppid=2447 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:37.373953 kernel: audit: type=1327 audit(1734099997.302:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:37.374971 env[1331]: time="2024-12-13T14:26:37.374914153Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:37.375613 env[1331]: time="2024-12-13T14:26:37.375574350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 14:26:37.377000 audit[3357]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:37.377000 audit[3357]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff8e124610 a2=0 a3=7fff8e1245fc items=0 ppid=2447 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:37.424543 env[1331]: time="2024-12-13T14:26:37.424300389Z" level=info msg="CreateContainer within sandbox \"c02879e8af8946f46c183d1e6f5cf5df1609216fb9dadee49c15177e06b908e7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 14:26:37.428854 kernel: audit: type=1325 audit(1734099997.377:285): table=nat:96 family=2 entries=19 op=nft_register_chain pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:37.429002 kernel: audit: type=1300 audit(1734099997.377:285): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff8e124610 a2=0 a3=7fff8e1245fc items=0 ppid=2447 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:37.432399 kernel: audit: type=1327 audit(1734099997.377:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:37.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:37.464375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178346749.mount: Deactivated successfully. Dec 13 14:26:37.474210 env[1331]: time="2024-12-13T14:26:37.474122724Z" level=info msg="CreateContainer within sandbox \"c02879e8af8946f46c183d1e6f5cf5df1609216fb9dadee49c15177e06b908e7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c66bbd4c8d3bcaa6d383c434d50c62e8913d93b7f73ec3a3fbcf5a80dc9ded7c\"" Dec 13 14:26:37.475320 env[1331]: time="2024-12-13T14:26:37.475279716Z" level=info msg="StartContainer for \"c66bbd4c8d3bcaa6d383c434d50c62e8913d93b7f73ec3a3fbcf5a80dc9ded7c\"" Dec 13 14:26:37.551602 env[1331]: time="2024-12-13T14:26:37.550400414Z" level=info msg="StartContainer for \"c66bbd4c8d3bcaa6d383c434d50c62e8913d93b7f73ec3a3fbcf5a80dc9ded7c\" returns successfully" Dec 13 14:26:37.619230 kubelet[2291]: I1213 14:26:37.618771 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-tdt22" podStartSLOduration=1.650995817 podStartE2EDuration="21.618679601s" podCreationTimestamp="2024-12-13 14:26:16 +0000 UTC" firstStartedPulling="2024-12-13 14:26:17.40910687 +0000 UTC m=+26.328146144" lastFinishedPulling="2024-12-13 14:26:37.376790649 +0000 UTC m=+46.295829928" observedRunningTime="2024-12-13 14:26:37.617451251 +0000 UTC m=+46.536490535" watchObservedRunningTime="2024-12-13 14:26:37.618679601 +0000 UTC m=+46.537718889" Dec 13 14:26:37.729762 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 14:26:37.729928 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 14:26:38.630914 systemd[1]: run-containerd-runc-k8s.io-c66bbd4c8d3bcaa6d383c434d50c62e8913d93b7f73ec3a3fbcf5a80dc9ded7c-runc.YALZOt.mount: Deactivated successfully. Dec 13 14:26:39.136000 audit[3530]: AVC avc: denied { write } for pid=3530 comm="tee" name="fd" dev="proc" ino=25238 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:26:39.158422 kernel: audit: type=1400 audit(1734099999.136:286): avc: denied { write } for pid=3530 comm="tee" name="fd" dev="proc" ino=25238 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:26:39.136000 audit[3530]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc383199b4 a2=241 a3=1b6 items=1 ppid=3488 pid=3530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.202395 kernel: audit: type=1300 audit(1734099999.136:286): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc383199b4 a2=241 a3=1b6 items=1 ppid=3488 pid=3530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.136000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 14:26:39.136000 audit: PATH item=0 name="/dev/fd/63" inode=24089 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:39.248602 kernel: audit: type=1307 audit(1734099999.136:286): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 14:26:39.248779 kernel: audit: type=1302 audit(1734099999.136:286): item=0 name="/dev/fd/63" inode=24089 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:39.136000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:26:39.167000 audit[3526]: AVC avc: denied { write } for pid=3526 comm="tee" name="fd" dev="proc" ino=24103 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:26:39.167000 audit[3526]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4c0fc9c4 a2=241 a3=1b6 items=1 ppid=3479 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.167000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 14:26:39.167000 audit: PATH item=0 name="/dev/fd/63" inode=24088 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:39.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:26:39.169000 audit[3533]: AVC avc: denied { write } for pid=3533 comm="tee" name="fd" dev="proc" ino=25244 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:26:39.169000 audit[3533]: SYSCALL 
arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff4ee139c4 a2=241 a3=1b6 items=1 ppid=3475 pid=3533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.169000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 14:26:39.169000 audit: PATH item=0 name="/dev/fd/63" inode=24090 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:39.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:26:39.189000 audit[3537]: AVC avc: denied { write } for pid=3537 comm="tee" name="fd" dev="proc" ino=24108 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:26:39.189000 audit[3537]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe0d0059b5 a2=241 a3=1b6 items=1 ppid=3482 pid=3537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.189000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 14:26:39.189000 audit: PATH item=0 name="/dev/fd/63" inode=24093 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:39.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:26:39.163000 audit[3531]: AVC avc: denied { write } for pid=3531 comm="tee" name="fd" dev="proc" ino=24110 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:26:39.163000 audit[3531]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa51849c5 a2=241 a3=1b6 items=1 ppid=3485 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.163000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 14:26:39.163000 audit: PATH item=0 name="/dev/fd/63" inode=25234 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:39.163000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:26:39.197000 audit[3546]: AVC avc: denied { write } for pid=3546 comm="tee" name="fd" dev="proc" ino=24115 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:26:39.197000 audit[3546]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb15af9c6 a2=241 a3=1b6 items=1 ppid=3492 pid=3546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.197000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 14:26:39.197000 audit: PATH item=0 
name="/dev/fd/63" inode=24100 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:39.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:26:39.207000 audit[3553]: AVC avc: denied { write } for pid=3553 comm="tee" name="fd" dev="proc" ino=24119 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:26:39.207000 audit[3553]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd295699c4 a2=241 a3=1b6 items=1 ppid=3481 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.207000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 14:26:39.207000 audit: PATH item=0 name="/dev/fd/63" inode=24107 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:39.207000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:26:39.490000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.490000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.490000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.490000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.490000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.490000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.490000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.490000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.490000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.490000 audit: BPF prog-id=10 op=LOAD Dec 13 14:26:39.490000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff56e5c7e0 a2=98 a3=3 items=0 ppid=3476 pid=3568 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.490000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.492000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit: BPF prog-id=11 op=LOAD Dec 13 14:26:39.492000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff56e5c5c0 a2=74 a3=540051 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.492000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.492000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.492000 audit: BPF prog-id=12 op=LOAD Dec 13 14:26:39.492000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff56e5c5f0 a2=94 a3=2 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.492000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.492000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit: BPF prog-id=13 op=LOAD Dec 13 14:26:39.663000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff56e5c4b0 a2=40 a3=1 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.663000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.663000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:26:39.663000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.663000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff56e5c580 a2=50 a3=7fff56e5c660 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.663000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.675000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.675000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff56e5c4c0 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.675000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.675000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff56e5c4f0 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.675000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.675000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff56e5c400 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.675000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.675000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff56e5c510 a2=28 a3=0 
items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.675000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.675000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff56e5c4f0 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff56e5c4e0 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff56e5c510 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff56e5c4f0 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff56e5c510 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: 
denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff56e5c4e0 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff56e5c550 a2=28 a3=0 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff56e5c300 a2=50 a3=1 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit: BPF prog-id=14 op=LOAD Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff56e5c300 a2=94 a3=5 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff56e5c3b0 a2=50 a3=1 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff56e5c4d0 a2=4 a3=38 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.676000 audit[3568]: AVC avc: denied { confidentiality } for pid=3568 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:26:39.676000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff56e5c520 a2=94 a3=6 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:26:39.677000 audit[3568]: AVC avc: denied { confidentiality } for pid=3568 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:26:39.677000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff56e5bcd0 a2=94 a3=83 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { perfmon } for pid=3568 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { bpf } for pid=3568 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.677000 audit[3568]: AVC avc: denied { confidentiality } for pid=3568 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:26:39.677000 audit[3568]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff56e5bcd0 a2=94 a3=83 items=0 ppid=3476 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:26:39.689000 audit[3587]: AVC avc: 
denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit: BPF prog-id=15 op=LOAD Dec 13 14:26:39.689000 audit[3587]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4aa2afa0 a2=98 a3=1999999999999999 items=0 ppid=3476 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:26:39.689000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { 
perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit: BPF prog-id=16 op=LOAD Dec 13 14:26:39.689000 audit[3587]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4aa2ae80 a2=74 a3=ffff items=0 ppid=3476 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:26:39.689000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { perfmon } for pid=3587 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit[3587]: AVC avc: denied { bpf } for pid=3587 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.689000 audit: BPF prog-id=17 op=LOAD Dec 13 14:26:39.689000 audit[3587]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4aa2aec0 a2=40 a3=7ffd4aa2b0a0 items=0 ppid=3476 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:26:39.690000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:26:39.782696 systemd-networkd[1078]: vxlan.calico: Link UP Dec 13 14:26:39.782708 systemd-networkd[1078]: vxlan.calico: Gained carrier Dec 13 14:26:39.815000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.815000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.815000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.815000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.815000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.815000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.815000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.815000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.815000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.815000 audit: BPF prog-id=18 op=LOAD Dec 13 14:26:39.815000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce3572450 a2=98 a3=ffffffff items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.815000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.815000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit: BPF prog-id=19 op=LOAD Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce3572260 a2=74 a3=540051 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
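(The bpftool activity recorded above is evidently part of Calico's BPF dataplane setup: the pin paths under /sys/fs/bpf/calico and the calico_failsafe_ports map name identify it, the capability2 denials correspond to CAP_PERFMON (38) and CAP_BPF (39), and the lockdown "confidentiality" denials are bpftool feature probes that would read kernel memory. Each audit PROCTITLE field is the invoked command line, hex-encoded with NUL-separated arguments. The snippet below is a minimal decoding sketch for reading these fields offline; it is an illustrative helper, not part of the captured log, and the sample value is copied from the pid 3568 "bpftool map list" records above.)

# decode_proctitle.py - illustrative helper (assumed, not from the log):
# turn an audit PROCTITLE hex string back into a readable command line.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    # argv elements are NUL-separated in the audit record
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

if __name__ == "__main__":
    # Sample taken from the pid 3568 records above; prints: bpftool map list --json
    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))

(Decoded the same way, the longer PROCTITLE values in this section correspond to commands such as "bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash key 4 value 1 entries 65535 name calico_failsafe_ports_" and "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp".)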
14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit: BPF prog-id=20 op=LOAD Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce3572290 a2=94 a3=2 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffce3572160 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce3572190 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce35720a0 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffce35721b0 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffce3572190 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffce3572180 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffce35721b0 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce3572190 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.816000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.816000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce35721b0 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.816000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce3572180 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.817000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffce35721f0 a2=28 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.817000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.817000 audit: BPF prog-id=21 op=LOAD Dec 13 14:26:39.817000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce3572060 a2=40 a3=0 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.817000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.817000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffce3572050 a2=50 a3=2800 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.818000 
audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffce3572050 a2=50 a3=2800 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.818000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit: BPF prog-id=22 op=LOAD Dec 13 14:26:39.818000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce3571870 a2=94 a3=2 items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 14:26:39.818000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.818000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { perfmon } for pid=3615 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit[3615]: AVC avc: denied { bpf } for pid=3615 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.818000 audit: BPF prog-id=23 op=LOAD Dec 13 14:26:39.818000 audit[3615]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce3571970 a2=94 a3=2d items=0 ppid=3476 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.818000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:26:39.823000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.823000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.823000 
audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.823000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.823000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.823000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.823000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.823000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.823000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.823000 audit: BPF prog-id=24 op=LOAD Dec 13 14:26:39.823000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb71f2120 a2=98 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.823000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.824000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit: BPF prog-id=25 op=LOAD Dec 13 14:26:39.824000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb71f1f00 a2=74 a3=540051 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.824000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.824000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.824000 audit: BPF prog-id=26 op=LOAD Dec 13 14:26:39.824000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb71f1f30 a2=94 a3=2 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.824000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.824000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit: BPF prog-id=27 op=LOAD Dec 13 14:26:39.978000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb71f1df0 a2=40 a3=1 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.978000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.978000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:26:39.978000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.978000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffeb71f1ec0 a2=50 a3=7ffeb71f1fa0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.978000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeb71f1e00 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb71f1e30 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb71f1d40 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeb71f1e50 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7ffeb71f1e30 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeb71f1e20 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeb71f1e50 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb71f1e30 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb71f1e50 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.990000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.990000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb71f1e20 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeb71f1e90 a2=28 a3=0 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffeb71f1c40 a2=50 a3=1 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit: BPF prog-id=28 op=LOAD Dec 13 14:26:39.991000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb71f1c40 a2=94 a3=5 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.991000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffeb71f1cf0 a2=50 a3=1 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffeb71f1e10 a2=4 a3=38 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 
audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.991000 audit[3617]: AVC avc: denied { confidentiality } for pid=3617 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:26:39.991000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffeb71f1e60 a2=94 a3=6 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { confidentiality } for pid=3617 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:26:39.992000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffeb71f1610 a2=94 a3=83 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.992000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { perfmon } for pid=3617 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { confidentiality } for pid=3617 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:26:39.992000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffeb71f1610 a2=94 a3=83 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.992000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.992000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.992000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffeb71f3050 a2=10 a3=f1f00800 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.992000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.993000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.993000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffeb71f2ef0 a2=10 a3=3 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.993000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.993000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.993000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffeb71f2e90 a2=10 a3=3 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.993000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:39.993000 audit[3617]: AVC avc: denied { bpf } for pid=3617 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:26:39.993000 audit[3617]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffeb71f2e90 a2=10 a3=7 items=0 ppid=3476 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:39.993000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:26:40.002000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:26:40.107000 audit[3647]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3647 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:40.107000 audit[3647]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd6648d310 a2=0 a3=7ffd6648d2fc items=0 ppid=3476 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:40.107000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:40.123000 audit[3646]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=3646 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:40.123000 audit[3646]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc0f193160 a2=0 a3=7ffc0f19314c items=0 ppid=3476 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:40.123000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:40.126000 audit[3648]: NETFILTER_CFG table=filter:99 family=2 entries=39 op=nft_register_chain pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:40.126000 audit[3648]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7fff710705c0 a2=0 a3=7fff710705ac items=0 ppid=3476 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:40.126000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:40.131000 audit[3645]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=3645 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:40.131000 audit[3645]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffeab1a090 a2=0 a3=7fffeab1a07c items=0 ppid=3476 pid=3645 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:40.131000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:41.272163 env[1331]: time="2024-12-13T14:26:41.272102832Z" level=info msg="StopPodSandbox for \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\"" Dec 13 14:26:41.305450 systemd-networkd[1078]: vxlan.calico: Gained IPv6LL Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.336 [INFO][3674] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.336 [INFO][3674] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" iface="eth0" netns="/var/run/netns/cni-4a325ae5-a70b-aba3-8fdf-7ddd8e0caecf" Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.336 [INFO][3674] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" iface="eth0" netns="/var/run/netns/cni-4a325ae5-a70b-aba3-8fdf-7ddd8e0caecf" Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.337 [INFO][3674] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" iface="eth0" netns="/var/run/netns/cni-4a325ae5-a70b-aba3-8fdf-7ddd8e0caecf" Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.337 [INFO][3674] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.337 [INFO][3674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.368 [INFO][3681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" HandleID="k8s-pod-network.7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.368 [INFO][3681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.368 [INFO][3681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.375 [WARNING][3681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" HandleID="k8s-pod-network.7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.375 [INFO][3681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" HandleID="k8s-pod-network.7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.377 [INFO][3681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:41.381536 env[1331]: 2024-12-13 14:26:41.379 [INFO][3674] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:41.382804 env[1331]: time="2024-12-13T14:26:41.382748821Z" level=info msg="TearDown network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\" successfully" Dec 13 14:26:41.382990 env[1331]: time="2024-12-13T14:26:41.382960639Z" level=info msg="StopPodSandbox for \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\" returns successfully" Dec 13 14:26:41.387534 systemd[1]: run-netns-cni\x2d4a325ae5\x2da70b\x2daba3\x2d8fdf\x2d7ddd8e0caecf.mount: Deactivated successfully. Dec 13 14:26:41.390995 env[1331]: time="2024-12-13T14:26:41.390945733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c579667c-4vgnc,Uid:c2a62712-176a-4d0f-9d03-df8cafed69c7,Namespace:calico-system,Attempt:1,}" Dec 13 14:26:41.550562 systemd-networkd[1078]: cali1e846fa20f2: Link UP Dec 13 14:26:41.564460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:26:41.564649 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1e846fa20f2: link becomes ready Dec 13 14:26:41.568203 systemd-networkd[1078]: cali1e846fa20f2: Gained carrier Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.463 [INFO][3687] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0 calico-kube-controllers-8c579667c- calico-system c2a62712-176a-4d0f-9d03-df8cafed69c7 793 0 2024-12-13 14:26:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8c579667c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal calico-kube-controllers-8c579667c-4vgnc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1e846fa20f2 [] []}} ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Namespace="calico-system" Pod="calico-kube-controllers-8c579667c-4vgnc" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.464 [INFO][3687] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Namespace="calico-system" Pod="calico-kube-controllers-8c579667c-4vgnc" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.503 [INFO][3699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" HandleID="k8s-pod-network.e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.514 [INFO][3699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" HandleID="k8s-pod-network.e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", "pod":"calico-kube-controllers-8c579667c-4vgnc", "timestamp":"2024-12-13 14:26:41.503664733 +0000 UTC"}, Hostname:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.514 [INFO][3699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.514 [INFO][3699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
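(Annotation, not part of the captured journal: the repeated proctitle= fields in the audit records above are the audited process's argv, hex-encoded with NUL separators. A minimal decoder, sketched here in Python with an illustrative helper name, recovers the commands behind the AVC denials and the netfilter reloads.)

# Sketch: decode auditd PROCTITLE fields (argv elements are NUL-separated, hex-encoded).
def decode_proctitle(hex_value: str) -> str:
    return " ".join(bytes.fromhex(hex_value).decode().split("\x00"))

# The bpftool records from 14:26:39 decode to:
#   bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
print(decode_proctitle(
    "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E656400"
    "2F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41"
))

# The iptables-nft records from 14:26:40 decode to:
#   iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F736500"
    "2D2D77616974003130002D2D776169742D696E74657276616C003530303030"
))

(So the denials come from Calico's felix invoking bpftool against its pinned XDP prefilter program, and the NETFILTER_CFG bursts come from iptables-nft-restore rewriting the mangle/nat/filter/raw chains.)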
Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.514 [INFO][3699] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.516 [INFO][3699] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.523 [INFO][3699] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.528 [INFO][3699] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.529 [INFO][3699] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.532 [INFO][3699] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.532 [INFO][3699] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.533 [INFO][3699] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2 Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.538 [INFO][3699] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.545 [INFO][3699] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.65/26] block=192.168.105.64/26 handle="k8s-pod-network.e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.545 [INFO][3699] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.65/26] handle="k8s-pod-network.e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.545 [INFO][3699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
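(Annotation: the IPAM lines above show Calico confirming this node's affinity for the block 192.168.105.64/26 and handing out the first address from it. A quick check, using only values that appear in the log, illustrates the per-node block arithmetic; Python's ipaddress module is assumed.)

import ipaddress

# Values taken from the IPAM log: the node's affine block and the addresses
# Calico assigns to the first two pods on this host.
block = ipaddress.ip_network("192.168.105.64/26")
assigned = [
    ipaddress.ip_address("192.168.105.65"),  # calico-kube-controllers pod (claimed above)
    ipaddress.ip_address("192.168.105.66"),  # calico-apiserver pod (claimed later in the log)
]

print(block.num_addresses)                  # 64 addresses in each per-node /26 block
print(all(ip in block for ip in assigned))  # True: both pod IPs fall inside the block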
Dec 13 14:26:41.591142 env[1331]: 2024-12-13 14:26:41.545 [INFO][3699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.65/26] IPv6=[] ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" HandleID="k8s-pod-network.e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.592485 env[1331]: 2024-12-13 14:26:41.547 [INFO][3687] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Namespace="calico-system" Pod="calico-kube-controllers-8c579667c-4vgnc" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0", GenerateName:"calico-kube-controllers-8c579667c-", Namespace:"calico-system", SelfLink:"", UID:"c2a62712-176a-4d0f-9d03-df8cafed69c7", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c579667c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-8c579667c-4vgnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1e846fa20f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:41.592485 env[1331]: 2024-12-13 14:26:41.547 [INFO][3687] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.65/32] ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Namespace="calico-system" Pod="calico-kube-controllers-8c579667c-4vgnc" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.592485 env[1331]: 2024-12-13 14:26:41.547 [INFO][3687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e846fa20f2 ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Namespace="calico-system" Pod="calico-kube-controllers-8c579667c-4vgnc" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.592485 env[1331]: 2024-12-13 14:26:41.569 [INFO][3687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Namespace="calico-system" 
Pod="calico-kube-controllers-8c579667c-4vgnc" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.592485 env[1331]: 2024-12-13 14:26:41.569 [INFO][3687] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Namespace="calico-system" Pod="calico-kube-controllers-8c579667c-4vgnc" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0", GenerateName:"calico-kube-controllers-8c579667c-", Namespace:"calico-system", SelfLink:"", UID:"c2a62712-176a-4d0f-9d03-df8cafed69c7", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c579667c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2", Pod:"calico-kube-controllers-8c579667c-4vgnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1e846fa20f2", MAC:"0a:23:0b:33:7b:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:41.592485 env[1331]: 2024-12-13 14:26:41.585 [INFO][3687] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2" Namespace="calico-system" Pod="calico-kube-controllers-8c579667c-4vgnc" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:41.621847 env[1331]: time="2024-12-13T14:26:41.621697953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:41.622153 env[1331]: time="2024-12-13T14:26:41.621812270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:41.622440 env[1331]: time="2024-12-13T14:26:41.622296551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:41.623019 env[1331]: time="2024-12-13T14:26:41.622946750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2 pid=3725 runtime=io.containerd.runc.v2 Dec 13 14:26:41.627000 audit[3731]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3731 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:41.627000 audit[3731]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffcb7c2ae20 a2=0 a3=7ffcb7c2ae0c items=0 ppid=3476 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:41.627000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:41.749617 env[1331]: time="2024-12-13T14:26:41.749560791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c579667c-4vgnc,Uid:c2a62712-176a-4d0f-9d03-df8cafed69c7,Namespace:calico-system,Attempt:1,} returns sandbox id \"e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2\"" Dec 13 14:26:41.754082 env[1331]: time="2024-12-13T14:26:41.754021164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 14:26:42.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.74:22-137.184.27.180:53286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:42.080748 systemd[1]: Started sshd@9-10.128.0.74:22-137.184.27.180:53286.service. Dec 13 14:26:42.273581 env[1331]: time="2024-12-13T14:26:42.273511359Z" level=info msg="StopPodSandbox for \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\"" Dec 13 14:26:42.276084 env[1331]: time="2024-12-13T14:26:42.276038482Z" level=info msg="StopPodSandbox for \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\"" Dec 13 14:26:42.277343 env[1331]: time="2024-12-13T14:26:42.273511455Z" level=info msg="StopPodSandbox for \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\"" Dec 13 14:26:42.359679 sshd[3761]: Invalid user mma from 137.184.27.180 port 53286 Dec 13 14:26:42.405558 kernel: kauditd_printk_skb: 515 callbacks suppressed Dec 13 14:26:42.405733 kernel: audit: type=1100 audit(1734100002.374:390): pid=3761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="mma" exe="/usr/sbin/sshd" hostname=137.184.27.180 addr=137.184.27.180 terminal=ssh res=failed' Dec 13 14:26:42.374000 audit[3761]: USER_AUTH pid=3761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="mma" exe="/usr/sbin/sshd" hostname=137.184.27.180 addr=137.184.27.180 terminal=ssh res=failed' Dec 13 14:26:42.405918 sshd[3761]: Failed password for invalid user mma from 137.184.27.180 port 53286 ssh2 Dec 13 14:26:42.418492 sshd[3761]: Received disconnect from 137.184.27.180 port 53286:11: Bye Bye [preauth] Dec 13 14:26:42.418492 sshd[3761]: Disconnected from invalid user mma 137.184.27.180 port 53286 [preauth] Dec 13 14:26:42.425833 systemd[1]: sshd@9-10.128.0.74:22-137.184.27.180:53286.service: Deactivated successfully. Dec 13 14:26:42.452020 kernel: audit: type=1131 audit(1734100002.425:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.74:22-137.184.27.180:53286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:42.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.74:22-137.184.27.180:53286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.491 [INFO][3810] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.492 [INFO][3810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" iface="eth0" netns="/var/run/netns/cni-c85b9b20-076d-294c-a5d0-00ec56426e20" Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.492 [INFO][3810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" iface="eth0" netns="/var/run/netns/cni-c85b9b20-076d-294c-a5d0-00ec56426e20" Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.493 [INFO][3810] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" iface="eth0" netns="/var/run/netns/cni-c85b9b20-076d-294c-a5d0-00ec56426e20" Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.494 [INFO][3810] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.494 [INFO][3810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.543 [INFO][3832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" HandleID="k8s-pod-network.b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.544 [INFO][3832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.544 [INFO][3832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.558 [WARNING][3832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" HandleID="k8s-pod-network.b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.559 [INFO][3832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" HandleID="k8s-pod-network.b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.561 [INFO][3832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:42.570300 env[1331]: 2024-12-13 14:26:42.567 [INFO][3810] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:42.580619 env[1331]: time="2024-12-13T14:26:42.580510810Z" level=info msg="TearDown network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\" successfully" Dec 13 14:26:42.580874 env[1331]: time="2024-12-13T14:26:42.580839425Z" level=info msg="StopPodSandbox for \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\" returns successfully" Dec 13 14:26:42.580943 systemd[1]: run-netns-cni\x2dc85b9b20\x2d076d\x2d294c\x2da5d0\x2d00ec56426e20.mount: Deactivated successfully. Dec 13 14:26:42.584862 env[1331]: time="2024-12-13T14:26:42.584785742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67448fdc7d-mzvdz,Uid:344bd3be-fc59-4092-b9f4-59fe040c7639,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.452 [INFO][3808] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.453 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" iface="eth0" netns="/var/run/netns/cni-43908093-906e-40fa-b67f-9bb8a7a1abe8" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.453 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" iface="eth0" netns="/var/run/netns/cni-43908093-906e-40fa-b67f-9bb8a7a1abe8" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.454 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" iface="eth0" netns="/var/run/netns/cni-43908093-906e-40fa-b67f-9bb8a7a1abe8" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.454 [INFO][3808] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.454 [INFO][3808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.550 [INFO][3827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" HandleID="k8s-pod-network.e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.550 [INFO][3827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.561 [INFO][3827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.587 [WARNING][3827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" HandleID="k8s-pod-network.e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.588 [INFO][3827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" HandleID="k8s-pod-network.e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.590 [INFO][3827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:42.599914 env[1331]: 2024-12-13 14:26:42.597 [INFO][3808] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:42.604716 env[1331]: time="2024-12-13T14:26:42.604624680Z" level=info msg="TearDown network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\" successfully" Dec 13 14:26:42.605714 env[1331]: time="2024-12-13T14:26:42.605621525Z" level=info msg="StopPodSandbox for \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\" returns successfully" Dec 13 14:26:42.608577 env[1331]: time="2024-12-13T14:26:42.608539430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67448fdc7d-2zp8b,Uid:35d24807-89c8-42de-a1a0-7e24511228d9,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:26:42.610307 systemd[1]: run-netns-cni\x2d43908093\x2d906e\x2d40fa\x2db67f\x2d9bb8a7a1abe8.mount: Deactivated successfully. 
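(Annotation: the run-netns-cni\x2d... .mount units that systemd reports as deactivated are the bind mounts backing the network namespaces the CNI plugin just tore down; in path-derived unit names "-" stands for "/" and literal dashes are escaped as "\x2d". A simplified sketch of the reverse mapping, with an illustrative helper name; systemd-escape itself handles more cases.)

# Sketch: map a systemd mount-unit name like the ones above back to its path.
def unit_to_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    parts = name.split("-")  # an unescaped "-" separates path components
    return "/" + "/".join(p.replace("\\x2d", "-") for p in parts)

print(unit_to_path(r"run-netns-cni\x2d43908093\x2d906e\x2d40fa\x2db67f\x2d9bb8a7a1abe8.mount"))
# -> /run/netns/cni-43908093-906e-40fa-b67f-9bb8a7a1abe8, matching the netns the
#    e68adf546f764... sandbox teardown reported in the Calico log above.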
Dec 13 14:26:42.648984 systemd-networkd[1078]: cali1e846fa20f2: Gained IPv6LL Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.507 [INFO][3809] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.508 [INFO][3809] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" iface="eth0" netns="/var/run/netns/cni-e5470725-8dde-f707-8654-9a9475e8f35e" Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.508 [INFO][3809] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" iface="eth0" netns="/var/run/netns/cni-e5470725-8dde-f707-8654-9a9475e8f35e" Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.509 [INFO][3809] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" iface="eth0" netns="/var/run/netns/cni-e5470725-8dde-f707-8654-9a9475e8f35e" Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.509 [INFO][3809] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.509 [INFO][3809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.662 [INFO][3837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" HandleID="k8s-pod-network.7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.663 [INFO][3837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.663 [INFO][3837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.682 [WARNING][3837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" HandleID="k8s-pod-network.7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.682 [INFO][3837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" HandleID="k8s-pod-network.7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.685 [INFO][3837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:42.688959 env[1331]: 2024-12-13 14:26:42.687 [INFO][3809] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:42.690092 env[1331]: time="2024-12-13T14:26:42.690019160Z" level=info msg="TearDown network for sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\" successfully" Dec 13 14:26:42.690291 env[1331]: time="2024-12-13T14:26:42.690259868Z" level=info msg="StopPodSandbox for \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\" returns successfully" Dec 13 14:26:42.691413 env[1331]: time="2024-12-13T14:26:42.691328121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7f6gg,Uid:e8fdff54-c28c-49b3-874a-502acca12325,Namespace:kube-system,Attempt:1,}" Dec 13 14:26:42.944558 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:26:42.944700 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib9e18a47052: link becomes ready Dec 13 14:26:42.933630 systemd-networkd[1078]: calib9e18a47052: Link UP Dec 13 14:26:42.958403 systemd-networkd[1078]: calib9e18a47052: Gained carrier Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.775 [INFO][3848] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0 calico-apiserver-67448fdc7d- calico-apiserver 344bd3be-fc59-4092-b9f4-59fe040c7639 805 0 2024-12-13 14:26:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67448fdc7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal calico-apiserver-67448fdc7d-mzvdz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9e18a47052 [] []}} ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-mzvdz" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.776 [INFO][3848] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-mzvdz" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.847 [INFO][3884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" HandleID="k8s-pod-network.e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.864 [INFO][3884] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" HandleID="k8s-pod-network.e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000361430), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", "pod":"calico-apiserver-67448fdc7d-mzvdz", "timestamp":"2024-12-13 14:26:42.847295024 +0000 UTC"}, Hostname:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.864 [INFO][3884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.864 [INFO][3884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.864 [INFO][3884] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.867 [INFO][3884] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.873 [INFO][3884] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.879 [INFO][3884] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.883 [INFO][3884] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.887 [INFO][3884] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.887 [INFO][3884] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.889 [INFO][3884] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.899 [INFO][3884] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.908 [INFO][3884] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.66/26] block=192.168.105.64/26 handle="k8s-pod-network.e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.908 [INFO][3884] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.66/26] handle="k8s-pod-network.e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.908 [INFO][3884] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:42.982100 env[1331]: 2024-12-13 14:26:42.908 [INFO][3884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.66/26] IPv6=[] ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" HandleID="k8s-pod-network.e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:42.984145 env[1331]: 2024-12-13 14:26:42.911 [INFO][3848] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-mzvdz" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0", GenerateName:"calico-apiserver-67448fdc7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"344bd3be-fc59-4092-b9f4-59fe040c7639", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67448fdc7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-67448fdc7d-mzvdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9e18a47052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:42.984145 env[1331]: 2024-12-13 14:26:42.911 [INFO][3848] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.66/32] ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-mzvdz" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:42.984145 env[1331]: 2024-12-13 14:26:42.911 [INFO][3848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9e18a47052 ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-mzvdz" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:42.984145 env[1331]: 2024-12-13 14:26:42.962 [INFO][3848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Namespace="calico-apiserver" 
Pod="calico-apiserver-67448fdc7d-mzvdz" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:42.984145 env[1331]: 2024-12-13 14:26:42.963 [INFO][3848] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-mzvdz" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0", GenerateName:"calico-apiserver-67448fdc7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"344bd3be-fc59-4092-b9f4-59fe040c7639", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67448fdc7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b", Pod:"calico-apiserver-67448fdc7d-mzvdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9e18a47052", MAC:"46:ee:1e:94:42:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:42.984145 env[1331]: 2024-12-13 14:26:42.976 [INFO][3848] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-mzvdz" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:43.013000 audit[3910]: NETFILTER_CFG table=filter:102 family=2 entries=44 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:43.031434 kernel: audit: type=1325 audit(1734100003.013:392): table=filter:102 family=2 entries=44 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:43.013000 audit[3910]: SYSCALL arch=c000003e syscall=46 success=yes exit=24680 a0=3 a1=7ffeb1166200 a2=0 a3=7ffeb11661ec items=0 ppid=3476 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:43.074409 kernel: audit: type=1300 audit(1734100003.013:392): arch=c000003e syscall=46 success=yes exit=24680 a0=3 a1=7ffeb1166200 a2=0 
a3=7ffeb11661ec items=0 ppid=3476 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:43.013000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:43.121410 kernel: audit: type=1327 audit(1734100003.013:392): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:43.142795 env[1331]: time="2024-12-13T14:26:43.142640041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:43.142795 env[1331]: time="2024-12-13T14:26:43.142731214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:43.142795 env[1331]: time="2024-12-13T14:26:43.142749866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:43.143442 env[1331]: time="2024-12-13T14:26:43.143343613Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b pid=3920 runtime=io.containerd.runc.v2 Dec 13 14:26:43.198817 systemd-networkd[1078]: cali2d8f7ae0dab: Link UP Dec 13 14:26:43.211139 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2d8f7ae0dab: link becomes ready Dec 13 14:26:43.213087 systemd-networkd[1078]: cali2d8f7ae0dab: Gained carrier Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:42.782 [INFO][3858] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0 calico-apiserver-67448fdc7d- calico-apiserver 35d24807-89c8-42de-a1a0-7e24511228d9 804 0 2024-12-13 14:26:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67448fdc7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal calico-apiserver-67448fdc7d-2zp8b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d8f7ae0dab [] []}} ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-2zp8b" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:42.782 [INFO][3858] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-2zp8b" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.076 [INFO][3885] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" HandleID="k8s-pod-network.408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.093 [INFO][3885] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" HandleID="k8s-pod-network.408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5330), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", "pod":"calico-apiserver-67448fdc7d-2zp8b", "timestamp":"2024-12-13 14:26:43.075834913 +0000 UTC"}, Hostname:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.093 [INFO][3885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.093 [INFO][3885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.093 [INFO][3885] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.095 [INFO][3885] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.100 [INFO][3885] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.125 [INFO][3885] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.128 [INFO][3885] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.132 [INFO][3885] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.132 [INFO][3885] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.135 [INFO][3885] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.159 [INFO][3885] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 
handle="k8s-pod-network.408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.172 [INFO][3885] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.67/26] block=192.168.105.64/26 handle="k8s-pod-network.408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.172 [INFO][3885] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.67/26] handle="k8s-pod-network.408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.172 [INFO][3885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:43.239052 env[1331]: 2024-12-13 14:26:43.172 [INFO][3885] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.67/26] IPv6=[] ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" HandleID="k8s-pod-network.408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:43.240898 env[1331]: 2024-12-13 14:26:43.175 [INFO][3858] cni-plugin/k8s.go 386: Populated endpoint ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-2zp8b" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0", GenerateName:"calico-apiserver-67448fdc7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"35d24807-89c8-42de-a1a0-7e24511228d9", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67448fdc7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-67448fdc7d-2zp8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d8f7ae0dab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:43.240898 env[1331]: 2024-12-13 14:26:43.175 [INFO][3858] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.67/32] ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Namespace="calico-apiserver" 
Pod="calico-apiserver-67448fdc7d-2zp8b" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:43.240898 env[1331]: 2024-12-13 14:26:43.176 [INFO][3858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d8f7ae0dab ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-2zp8b" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:43.240898 env[1331]: 2024-12-13 14:26:43.214 [INFO][3858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-2zp8b" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:43.240898 env[1331]: 2024-12-13 14:26:43.215 [INFO][3858] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-2zp8b" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0", GenerateName:"calico-apiserver-67448fdc7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"35d24807-89c8-42de-a1a0-7e24511228d9", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67448fdc7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c", Pod:"calico-apiserver-67448fdc7d-2zp8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d8f7ae0dab", MAC:"36:8c:ee:9a:48:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:43.240898 env[1331]: 2024-12-13 14:26:43.234 [INFO][3858] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c" Namespace="calico-apiserver" Pod="calico-apiserver-67448fdc7d-2zp8b" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:43.269000 audit[3952]: NETFILTER_CFG 
table=filter:103 family=2 entries=38 op=nft_register_chain pid=3952 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:43.289327 kernel: audit: type=1325 audit(1734100003.269:393): table=filter:103 family=2 entries=38 op=nft_register_chain pid=3952 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:43.330214 kernel: audit: type=1300 audit(1734100003.269:393): arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffd92089c70 a2=0 a3=7ffd92089c5c items=0 ppid=3476 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:43.269000 audit[3952]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffd92089c70 a2=0 a3=7ffd92089c5c items=0 ppid=3476 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:43.330591 env[1331]: time="2024-12-13T14:26:43.302729147Z" level=info msg="StopPodSandbox for \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\"" Dec 13 14:26:43.330591 env[1331]: time="2024-12-13T14:26:43.303170347Z" level=info msg="StopPodSandbox for \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\"" Dec 13 14:26:43.335200 systemd-networkd[1078]: cali85f4bb9fd58: Link UP Dec 13 14:26:43.348197 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali85f4bb9fd58: link becomes ready Dec 13 14:26:43.347286 systemd-networkd[1078]: cali85f4bb9fd58: Gained carrier Dec 13 14:26:43.269000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:43.385444 kernel: audit: type=1327 audit(1734100003.269:393): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:42.845 [INFO][3871] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0 coredns-76f75df574- kube-system e8fdff54-c28c-49b3-874a-502acca12325 806 0 2024-12-13 14:26:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal coredns-76f75df574-7f6gg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85f4bb9fd58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Namespace="kube-system" Pod="coredns-76f75df574-7f6gg" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:42.845 [INFO][3871] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Namespace="kube-system" Pod="coredns-76f75df574-7f6gg" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 
14:26:43.412778 env[1331]: 2024-12-13 14:26:43.137 [INFO][3896] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" HandleID="k8s-pod-network.c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.158 [INFO][3896] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" HandleID="k8s-pod-network.c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001feb50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", "pod":"coredns-76f75df574-7f6gg", "timestamp":"2024-12-13 14:26:43.1379424 +0000 UTC"}, Hostname:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.158 [INFO][3896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.173 [INFO][3896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.173 [INFO][3896] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.177 [INFO][3896] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.183 [INFO][3896] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.218 [INFO][3896] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.227 [INFO][3896] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.234 [INFO][3896] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.235 [INFO][3896] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.243 [INFO][3896] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34 Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.251 [INFO][3896] ipam/ipam.go 1203: Writing block in order to claim 
IPs block=192.168.105.64/26 handle="k8s-pod-network.c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.262 [INFO][3896] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.68/26] block=192.168.105.64/26 handle="k8s-pod-network.c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.263 [INFO][3896] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.68/26] handle="k8s-pod-network.c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.263 [INFO][3896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:43.412778 env[1331]: 2024-12-13 14:26:43.263 [INFO][3896] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.68/26] IPv6=[] ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" HandleID="k8s-pod-network.c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:43.416138 env[1331]: 2024-12-13 14:26:43.269 [INFO][3871] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Namespace="kube-system" Pod="coredns-76f75df574-7f6gg" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e8fdff54-c28c-49b3-874a-502acca12325", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-7f6gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85f4bb9fd58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:43.416138 env[1331]: 2024-12-13 14:26:43.269 [INFO][3871] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.68/32] ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Namespace="kube-system" Pod="coredns-76f75df574-7f6gg" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:43.416138 env[1331]: 2024-12-13 14:26:43.269 [INFO][3871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85f4bb9fd58 ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Namespace="kube-system" Pod="coredns-76f75df574-7f6gg" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:43.416138 env[1331]: 2024-12-13 14:26:43.350 [INFO][3871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Namespace="kube-system" Pod="coredns-76f75df574-7f6gg" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:43.416138 env[1331]: 2024-12-13 14:26:43.352 [INFO][3871] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Namespace="kube-system" Pod="coredns-76f75df574-7f6gg" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e8fdff54-c28c-49b3-874a-502acca12325", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34", Pod:"coredns-76f75df574-7f6gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85f4bb9fd58", MAC:"b2:ba:3c:04:52:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 
13 14:26:43.416138 env[1331]: 2024-12-13 14:26:43.369 [INFO][3871] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34" Namespace="kube-system" Pod="coredns-76f75df574-7f6gg" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:43.423000 audit[3975]: NETFILTER_CFG table=filter:104 family=2 entries=46 op=nft_register_chain pid=3975 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:43.474549 kernel: audit: type=1325 audit(1734100003.423:394): table=filter:104 family=2 entries=46 op=nft_register_chain pid=3975 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:43.474730 kernel: audit: type=1300 audit(1734100003.423:394): arch=c000003e syscall=46 success=yes exit=22712 a0=3 a1=7ffd0d4115f0 a2=0 a3=7ffd0d4115dc items=0 ppid=3476 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:43.423000 audit[3975]: SYSCALL arch=c000003e syscall=46 success=yes exit=22712 a0=3 a1=7ffd0d4115f0 a2=0 a3=7ffd0d4115dc items=0 ppid=3476 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:43.423000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:43.494047 env[1331]: time="2024-12-13T14:26:43.493918005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:43.494047 env[1331]: time="2024-12-13T14:26:43.494031057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:43.494345 env[1331]: time="2024-12-13T14:26:43.494073408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:43.494439 env[1331]: time="2024-12-13T14:26:43.494329998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c pid=4007 runtime=io.containerd.runc.v2 Dec 13 14:26:43.588817 systemd[1]: run-netns-cni\x2de5470725\x2d8dde\x2df707\x2d8654\x2d9a9475e8f35e.mount: Deactivated successfully. Dec 13 14:26:43.589374 env[1331]: time="2024-12-13T14:26:43.571038816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:43.589374 env[1331]: time="2024-12-13T14:26:43.571100109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:43.589374 env[1331]: time="2024-12-13T14:26:43.571116888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:43.590078 env[1331]: time="2024-12-13T14:26:43.590015009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67448fdc7d-mzvdz,Uid:344bd3be-fc59-4092-b9f4-59fe040c7639,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b\"" Dec 13 14:26:43.633788 env[1331]: time="2024-12-13T14:26:43.633696477Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34 pid=4044 runtime=io.containerd.runc.v2 Dec 13 14:26:43.884486 env[1331]: time="2024-12-13T14:26:43.884422343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67448fdc7d-2zp8b,Uid:35d24807-89c8-42de-a1a0-7e24511228d9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c\"" Dec 13 14:26:43.887706 env[1331]: time="2024-12-13T14:26:43.887653736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7f6gg,Uid:e8fdff54-c28c-49b3-874a-502acca12325,Namespace:kube-system,Attempt:1,} returns sandbox id \"c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34\"" Dec 13 14:26:43.893231 env[1331]: time="2024-12-13T14:26:43.893180982Z" level=info msg="CreateContainer within sandbox \"c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:26:43.943613 env[1331]: time="2024-12-13T14:26:43.943552990Z" level=info msg="CreateContainer within sandbox \"c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8be1a39714cd25487e2670fedc2a834d7e8afb57b56692b1ea6029bb6cfd0980\"" Dec 13 14:26:43.947210 env[1331]: time="2024-12-13T14:26:43.947154484Z" level=info msg="StartContainer for \"8be1a39714cd25487e2670fedc2a834d7e8afb57b56692b1ea6029bb6cfd0980\"" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.858 [INFO][4015] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.859 [INFO][4015] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" iface="eth0" netns="/var/run/netns/cni-b3a4ebd7-d8f7-fcf0-52d2-a6be0360a3cb" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.859 [INFO][4015] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" iface="eth0" netns="/var/run/netns/cni-b3a4ebd7-d8f7-fcf0-52d2-a6be0360a3cb" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.859 [INFO][4015] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" iface="eth0" netns="/var/run/netns/cni-b3a4ebd7-d8f7-fcf0-52d2-a6be0360a3cb" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.859 [INFO][4015] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.859 [INFO][4015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.964 [INFO][4098] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" HandleID="k8s-pod-network.3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.964 [INFO][4098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.964 [INFO][4098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.983 [WARNING][4098] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" HandleID="k8s-pod-network.3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.983 [INFO][4098] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" HandleID="k8s-pod-network.3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.985 [INFO][4098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:43.988660 env[1331]: 2024-12-13 14:26:43.986 [INFO][4015] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:43.989680 env[1331]: time="2024-12-13T14:26:43.988823647Z" level=info msg="TearDown network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\" successfully" Dec 13 14:26:43.989680 env[1331]: time="2024-12-13T14:26:43.988868953Z" level=info msg="StopPodSandbox for \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\" returns successfully" Dec 13 14:26:43.990812 env[1331]: time="2024-12-13T14:26:43.990754291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fcccw,Uid:d7e4fbf8-2010-40d9-9761-00fa99980147,Namespace:kube-system,Attempt:1,}" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:43.816 [INFO][4024] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:43.816 [INFO][4024] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" iface="eth0" netns="/var/run/netns/cni-1570a174-a034-177a-b4ac-576797bbf257" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:43.817 [INFO][4024] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" iface="eth0" netns="/var/run/netns/cni-1570a174-a034-177a-b4ac-576797bbf257" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:43.817 [INFO][4024] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" iface="eth0" netns="/var/run/netns/cni-1570a174-a034-177a-b4ac-576797bbf257" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:43.817 [INFO][4024] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:43.817 [INFO][4024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:43.991 [INFO][4095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" HandleID="k8s-pod-network.b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:43.996 [INFO][4095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:43.996 [INFO][4095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:44.009 [WARNING][4095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" HandleID="k8s-pod-network.b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:44.009 [INFO][4095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" HandleID="k8s-pod-network.b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:44.011 [INFO][4095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:44.020030 env[1331]: 2024-12-13 14:26:44.017 [INFO][4024] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:44.031717 env[1331]: time="2024-12-13T14:26:44.031439527Z" level=info msg="TearDown network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\" successfully" Dec 13 14:26:44.031717 env[1331]: time="2024-12-13T14:26:44.031711110Z" level=info msg="StopPodSandbox for \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\" returns successfully" Dec 13 14:26:44.051071 env[1331]: time="2024-12-13T14:26:44.050723097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xl287,Uid:e8e7e321-b490-4f2e-961a-6ba46f4b801a,Namespace:calico-system,Attempt:1,}" Dec 13 14:26:44.135720 env[1331]: time="2024-12-13T14:26:44.129488192Z" level=info msg="StartContainer for \"8be1a39714cd25487e2670fedc2a834d7e8afb57b56692b1ea6029bb6cfd0980\" returns successfully" Dec 13 14:26:44.423298 systemd-networkd[1078]: calib83792b1f20: Link UP Dec 13 14:26:44.440500 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:26:44.440664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib83792b1f20: link becomes ready Dec 13 14:26:44.454909 systemd-networkd[1078]: calib83792b1f20: Gained carrier Dec 13 14:26:44.455173 systemd-networkd[1078]: calib9e18a47052: Gained IPv6LL Dec 13 14:26:44.455469 systemd-networkd[1078]: cali2d8f7ae0dab: Gained IPv6LL Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.210 [INFO][4133] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0 coredns-76f75df574- kube-system d7e4fbf8-2010-40d9-9761-00fa99980147 822 0 2024-12-13 14:26:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal coredns-76f75df574-fcccw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib83792b1f20 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" Namespace="kube-system" Pod="coredns-76f75df574-fcccw" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.210 [INFO][4133] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" Namespace="kube-system" Pod="coredns-76f75df574-fcccw" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.302 [INFO][4184] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" HandleID="k8s-pod-network.80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.324 [INFO][4184] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" HandleID="k8s-pod-network.80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" 
Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333100), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", "pod":"coredns-76f75df574-fcccw", "timestamp":"2024-12-13 14:26:44.302814952 +0000 UTC"}, Hostname:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.324 [INFO][4184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.324 [INFO][4184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.324 [INFO][4184] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.327 [INFO][4184] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.333 [INFO][4184] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.339 [INFO][4184] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.342 [INFO][4184] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.345 [INFO][4184] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.345 [INFO][4184] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.347 [INFO][4184] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.352 [INFO][4184] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.402 [INFO][4184] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.69/26] block=192.168.105.64/26 handle="k8s-pod-network.80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.403 [INFO][4184] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.69/26] 
handle="k8s-pod-network.80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.403 [INFO][4184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:44.516508 env[1331]: 2024-12-13 14:26:44.403 [INFO][4184] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.69/26] IPv6=[] ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" HandleID="k8s-pod-network.80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:44.518064 env[1331]: 2024-12-13 14:26:44.405 [INFO][4133] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" Namespace="kube-system" Pod="coredns-76f75df574-fcccw" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d7e4fbf8-2010-40d9-9761-00fa99980147", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-fcccw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib83792b1f20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:44.518064 env[1331]: 2024-12-13 14:26:44.405 [INFO][4133] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.69/32] ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" Namespace="kube-system" Pod="coredns-76f75df574-fcccw" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:44.518064 env[1331]: 2024-12-13 14:26:44.406 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib83792b1f20 ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" 
Namespace="kube-system" Pod="coredns-76f75df574-fcccw" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:44.518064 env[1331]: 2024-12-13 14:26:44.457 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" Namespace="kube-system" Pod="coredns-76f75df574-fcccw" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:44.518064 env[1331]: 2024-12-13 14:26:44.458 [INFO][4133] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" Namespace="kube-system" Pod="coredns-76f75df574-fcccw" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d7e4fbf8-2010-40d9-9761-00fa99980147", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc", Pod:"coredns-76f75df574-fcccw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib83792b1f20", MAC:"f2:ed:8c:70:84:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:44.518064 env[1331]: 2024-12-13 14:26:44.505 [INFO][4133] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc" Namespace="kube-system" Pod="coredns-76f75df574-fcccw" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:44.596023 systemd[1]: run-netns-cni\x2d1570a174\x2da034\x2d177a\x2db4ac\x2d576797bbf257.mount: Deactivated successfully. Dec 13 14:26:44.596224 systemd[1]: run-netns-cni\x2db3a4ebd7\x2dd8f7\x2dfcf0\x2d52d2\x2da6be0360a3cb.mount: Deactivated successfully. 
Dec 13 14:26:44.609549 systemd-networkd[1078]: calidd45a46f2d6: Link UP Dec 13 14:26:44.625384 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidd45a46f2d6: link becomes ready Dec 13 14:26:44.627284 systemd-networkd[1078]: calidd45a46f2d6: Gained carrier Dec 13 14:26:44.658795 kubelet[2291]: I1213 14:26:44.658745 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7f6gg" podStartSLOduration=38.658667108 podStartE2EDuration="38.658667108s" podCreationTimestamp="2024-12-13 14:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:26:44.656664064 +0000 UTC m=+53.575703347" watchObservedRunningTime="2024-12-13 14:26:44.658667108 +0000 UTC m=+53.577706393" Dec 13 14:26:44.669838 env[1331]: time="2024-12-13T14:26:44.669733717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:44.670106 env[1331]: time="2024-12-13T14:26:44.670067270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:44.670273 env[1331]: time="2024-12-13T14:26:44.670238480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.238 [INFO][4156] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0 csi-node-driver- calico-system e8e7e321-b490-4f2e-961a-6ba46f4b801a 821 0 2024-12-13 14:26:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal csi-node-driver-xl287 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidd45a46f2d6 [] []}} ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Namespace="calico-system" Pod="csi-node-driver-xl287" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.238 [INFO][4156] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Namespace="calico-system" Pod="csi-node-driver-xl287" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.375 [INFO][4188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" HandleID="k8s-pod-network.c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.424 [INFO][4188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" 
HandleID="k8s-pod-network.c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310c50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", "pod":"csi-node-driver-xl287", "timestamp":"2024-12-13 14:26:44.375663024 +0000 UTC"}, Hostname:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.424 [INFO][4188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.424 [INFO][4188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.424 [INFO][4188] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal' Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.440 [INFO][4188] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.476 [INFO][4188] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.519 [INFO][4188] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.522 [INFO][4188] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.526 [INFO][4188] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.526 [INFO][4188] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.528 [INFO][4188] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5 Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.540 [INFO][4188] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.566 [INFO][4188] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.70/26] block=192.168.105.64/26 handle="k8s-pod-network.c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.566 [INFO][4188] ipam/ipam.go 847: Auto-assigned 1 out 
of 1 IPv4s: [192.168.105.70/26] handle="k8s-pod-network.c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" host="ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal" Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.566 [INFO][4188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:44.672474 env[1331]: 2024-12-13 14:26:44.567 [INFO][4188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.70/26] IPv6=[] ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" HandleID="k8s-pod-network.c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.673643 env[1331]: 2024-12-13 14:26:44.584 [INFO][4156] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Namespace="calico-system" Pod="csi-node-driver-xl287" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8e7e321-b490-4f2e-961a-6ba46f4b801a", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-xl287", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd45a46f2d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:44.673643 env[1331]: 2024-12-13 14:26:44.584 [INFO][4156] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.70/32] ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Namespace="calico-system" Pod="csi-node-driver-xl287" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.673643 env[1331]: 2024-12-13 14:26:44.584 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd45a46f2d6 ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Namespace="calico-system" Pod="csi-node-driver-xl287" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.673643 env[1331]: 2024-12-13 14:26:44.631 [INFO][4156] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Namespace="calico-system" Pod="csi-node-driver-xl287" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.673643 env[1331]: 2024-12-13 14:26:44.631 [INFO][4156] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Namespace="calico-system" Pod="csi-node-driver-xl287" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8e7e321-b490-4f2e-961a-6ba46f4b801a", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5", Pod:"csi-node-driver-xl287", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd45a46f2d6", MAC:"92:12:6d:a7:e5:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:44.673643 env[1331]: 2024-12-13 14:26:44.665 [INFO][4156] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5" Namespace="calico-system" Pod="csi-node-driver-xl287" WorkloadEndpoint="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:44.678413 env[1331]: time="2024-12-13T14:26:44.670629874Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc pid=4218 runtime=io.containerd.runc.v2 Dec 13 14:26:44.738000 audit[4237]: NETFILTER_CFG table=filter:105 family=2 entries=16 op=nft_register_rule pid=4237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:44.738000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcb0585030 a2=0 a3=7ffcb058501c items=0 ppid=2447 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 14:26:44.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:44.744000 audit[4237]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=4237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:44.744000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb0585030 a2=0 a3=0 items=0 ppid=2447 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:44.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:44.745000 audit[4232]: NETFILTER_CFG table=filter:107 family=2 entries=48 op=nft_register_chain pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:44.745000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=23448 a0=3 a1=7fff0ba91dc0 a2=0 a3=7fff0ba91dac items=0 ppid=3476 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:44.745000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:44.804000 audit[4265]: NETFILTER_CFG table=filter:108 family=2 entries=13 op=nft_register_rule pid=4265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:44.804000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc8ed89c00 a2=0 a3=7ffc8ed89bec items=0 ppid=2447 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:44.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:44.811000 audit[4265]: NETFILTER_CFG table=nat:109 family=2 entries=35 op=nft_register_chain pid=4265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:44.811000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc8ed89c00 a2=0 a3=7ffc8ed89bec items=0 ppid=2447 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:44.811000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:44.838733 env[1331]: time="2024-12-13T14:26:44.838636238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:44.839019 env[1331]: time="2024-12-13T14:26:44.838979119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:44.839180 env[1331]: time="2024-12-13T14:26:44.839147770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:44.839560 env[1331]: time="2024-12-13T14:26:44.839504344Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5 pid=4270 runtime=io.containerd.runc.v2 Dec 13 14:26:44.859000 audit[4279]: NETFILTER_CFG table=filter:110 family=2 entries=46 op=nft_register_chain pid=4279 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:26:44.859000 audit[4279]: SYSCALL arch=c000003e syscall=46 success=yes exit=22188 a0=3 a1=7fffa8bca060 a2=0 a3=7fffa8bca04c items=0 ppid=3476 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:44.859000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:26:44.890211 env[1331]: time="2024-12-13T14:26:44.890142174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fcccw,Uid:d7e4fbf8-2010-40d9-9761-00fa99980147,Namespace:kube-system,Attempt:1,} returns sandbox id \"80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc\"" Dec 13 14:26:44.900390 env[1331]: time="2024-12-13T14:26:44.898563409Z" level=info msg="CreateContainer within sandbox \"80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:26:44.946617 env[1331]: time="2024-12-13T14:26:44.945576938Z" level=info msg="CreateContainer within sandbox \"80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d76d0f524f463e3feeebbc9c0c42f1ac1c35e0331259b008b635630274eae9ca\"" Dec 13 14:26:44.950788 env[1331]: time="2024-12-13T14:26:44.950744228Z" level=info msg="StartContainer for \"d76d0f524f463e3feeebbc9c0c42f1ac1c35e0331259b008b635630274eae9ca\"" Dec 13 14:26:44.993887 env[1331]: time="2024-12-13T14:26:44.993830882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xl287,Uid:e8e7e321-b490-4f2e-961a-6ba46f4b801a,Namespace:calico-system,Attempt:1,} returns sandbox id \"c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5\"" Dec 13 14:26:45.060417 env[1331]: time="2024-12-13T14:26:45.060322179Z" level=info msg="StartContainer for \"d76d0f524f463e3feeebbc9c0c42f1ac1c35e0331259b008b635630274eae9ca\" returns successfully" Dec 13 14:26:45.402060 systemd-networkd[1078]: cali85f4bb9fd58: Gained IPv6LL Dec 13 14:26:45.584632 systemd[1]: run-containerd-runc-k8s.io-c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5-runc.xg7AHU.mount: Deactivated successfully. 
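Annotation: the IPAM records above show the node trying its affinity for block 192.168.105.64/26 and then claiming 192.168.105.70 for csi-node-driver-xl287, alongside the 192.168.105.69 already handed to coredns-76f75df574-fcccw. A short Go sketch, assuming nothing beyond the standard library, to make the block arithmetic concrete; it is not Calico's IPAM code.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The per-node IPAM block named in the log entries above.
        block := netip.MustParsePrefix("192.168.105.64/26")

        for _, s := range []string{
            "192.168.105.69", // coredns-76f75df574-fcccw
            "192.168.105.70", // csi-node-driver-xl287
        } {
            ip := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %t\n", ip, block, block.Contains(ip))
        }
        // A /26 block spans .64-.127 (64 addresses), so the node can keep
        // assigning pod IPs from it before claiming another block affinity.
    }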
Dec 13 14:26:45.685954 env[1331]: time="2024-12-13T14:26:45.685778815Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:45.691945 kubelet[2291]: I1213 14:26:45.691903 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fcccw" podStartSLOduration=39.691844578 podStartE2EDuration="39.691844578s" podCreationTimestamp="2024-12-13 14:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:26:45.674027781 +0000 UTC m=+54.593067062" watchObservedRunningTime="2024-12-13 14:26:45.691844578 +0000 UTC m=+54.610883862" Dec 13 14:26:45.703397 env[1331]: time="2024-12-13T14:26:45.703317092Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:45.707307 env[1331]: time="2024-12-13T14:26:45.707257996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:45.710468 env[1331]: time="2024-12-13T14:26:45.710419750Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:45.713278 env[1331]: time="2024-12-13T14:26:45.711451767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 14:26:45.725384 env[1331]: time="2024-12-13T14:26:45.720887119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:26:45.730405 env[1331]: time="2024-12-13T14:26:45.729283272Z" level=info msg="CreateContainer within sandbox \"e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 14:26:45.748000 audit[4360]: NETFILTER_CFG table=filter:111 family=2 entries=10 op=nft_register_rule pid=4360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:45.748000 audit[4360]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffbb517550 a2=0 a3=7fffbb51753c items=0 ppid=2447 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:45.748000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:45.754933 env[1331]: time="2024-12-13T14:26:45.754879255Z" level=info msg="CreateContainer within sandbox \"e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"570154f9dfa413c9e8c770406263dbe1aedfe21470bdb3732e9a7cc514aeb40e\"" Dec 13 14:26:45.756047 env[1331]: time="2024-12-13T14:26:45.756011700Z" level=info msg="StartContainer for \"570154f9dfa413c9e8c770406263dbe1aedfe21470bdb3732e9a7cc514aeb40e\"" Dec 13 
14:26:45.766000 audit[4360]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:45.766000 audit[4360]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffbb517550 a2=0 a3=7fffbb51753c items=0 ppid=2447 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:45.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:45.888499 env[1331]: time="2024-12-13T14:26:45.888424091Z" level=info msg="StartContainer for \"570154f9dfa413c9e8c770406263dbe1aedfe21470bdb3732e9a7cc514aeb40e\" returns successfully" Dec 13 14:26:46.040704 systemd-networkd[1078]: calidd45a46f2d6: Gained IPv6LL Dec 13 14:26:46.233619 systemd-networkd[1078]: calib83792b1f20: Gained IPv6LL Dec 13 14:26:46.578758 systemd[1]: run-containerd-runc-k8s.io-570154f9dfa413c9e8c770406263dbe1aedfe21470bdb3732e9a7cc514aeb40e-runc.eZS8w8.mount: Deactivated successfully. Dec 13 14:26:46.686409 kubelet[2291]: I1213 14:26:46.686367 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8c579667c-4vgnc" podStartSLOduration=25.725210588 podStartE2EDuration="29.686286662s" podCreationTimestamp="2024-12-13 14:26:17 +0000 UTC" firstStartedPulling="2024-12-13 14:26:41.751776143 +0000 UTC m=+50.670815404" lastFinishedPulling="2024-12-13 14:26:45.712852201 +0000 UTC m=+54.631891478" observedRunningTime="2024-12-13 14:26:46.68559762 +0000 UTC m=+55.604636905" watchObservedRunningTime="2024-12-13 14:26:46.686286662 +0000 UTC m=+55.605325944" Dec 13 14:26:47.737719 systemd[1]: run-containerd-runc-k8s.io-570154f9dfa413c9e8c770406263dbe1aedfe21470bdb3732e9a7cc514aeb40e-runc.h92W1d.mount: Deactivated successfully. 
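Annotation: the pod_startup_latency_tracker entry above reports podStartE2EDuration="29.686286662s" and podStartSLOduration=25.725210588 for calico-kube-controllers. The SLO figure appears to be the end-to-end time (creation to observed running) minus the image-pull window; the sketch below redoes that arithmetic with the timestamps from the record, under that assumption, and lands within rounding of the logged values.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        // Timestamps copied from the calico-kube-controllers record above,
        // with the monotonic m=+... suffixes dropped.
        created := parse("2024-12-13 14:26:17 +0000 UTC")
        pullStart := parse("2024-12-13 14:26:41.751776143 +0000 UTC")
        pullEnd := parse("2024-12-13 14:26:45.712852201 +0000 UTC")
        running := parse("2024-12-13 14:26:46.686286662 +0000 UTC")

        e2e := running.Sub(created)         // ~29.686s, the E2E duration
        slo := e2e - pullEnd.Sub(pullStart) // ~25.725s, E2E minus pull time
        fmt.Println(e2e, slo)
    }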
Dec 13 14:26:48.532145 env[1331]: time="2024-12-13T14:26:48.532100373Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.536590 env[1331]: time="2024-12-13T14:26:48.536511699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.539432 env[1331]: time="2024-12-13T14:26:48.539385208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.541862 env[1331]: time="2024-12-13T14:26:48.541818379Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.542305 env[1331]: time="2024-12-13T14:26:48.542246850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 14:26:48.544860 env[1331]: time="2024-12-13T14:26:48.544081016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:26:48.547078 env[1331]: time="2024-12-13T14:26:48.547034803Z" level=info msg="CreateContainer within sandbox \"e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:26:48.571598 env[1331]: time="2024-12-13T14:26:48.571550880Z" level=info msg="CreateContainer within sandbox \"e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2b2bad558516cd623ddaa1f4f6508472bcb3af4742ba8275084dc7cdca9c46ef\"" Dec 13 14:26:48.572653 env[1331]: time="2024-12-13T14:26:48.572609939Z" level=info msg="StartContainer for \"2b2bad558516cd623ddaa1f4f6508472bcb3af4742ba8275084dc7cdca9c46ef\"" Dec 13 14:26:48.677388 env[1331]: time="2024-12-13T14:26:48.676517114Z" level=info msg="StartContainer for \"2b2bad558516cd623ddaa1f4f6508472bcb3af4742ba8275084dc7cdca9c46ef\" returns successfully" Dec 13 14:26:48.737000 audit[4450]: NETFILTER_CFG table=filter:113 family=2 entries=10 op=nft_register_rule pid=4450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:48.743437 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 13 14:26:48.743589 kernel: audit: type=1325 audit(1734100008.737:403): table=filter:113 family=2 entries=10 op=nft_register_rule pid=4450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:48.737000 audit[4450]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd66f07a00 a2=0 a3=7ffd66f079ec items=0 ppid=2447 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:48.790116 env[1331]: time="2024-12-13T14:26:48.789983516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.737000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:48.795262 env[1331]: time="2024-12-13T14:26:48.794797086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.799085 env[1331]: time="2024-12-13T14:26:48.799041971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.803916 env[1331]: time="2024-12-13T14:26:48.803867519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.804607 env[1331]: time="2024-12-13T14:26:48.804558429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 14:26:48.810302 kernel: audit: type=1300 audit(1734100008.737:403): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd66f07a00 a2=0 a3=7ffd66f079ec items=0 ppid=2447 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:48.810441 kernel: audit: type=1327 audit(1734100008.737:403): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:48.812180 env[1331]: time="2024-12-13T14:26:48.810985223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 14:26:48.760000 audit[4450]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:48.829752 env[1331]: time="2024-12-13T14:26:48.813766822Z" level=info msg="CreateContainer within sandbox \"408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:26:48.863319 kernel: audit: type=1325 audit(1734100008.760:404): table=nat:114 family=2 entries=20 op=nft_register_rule pid=4450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:48.863487 kernel: audit: type=1300 audit(1734100008.760:404): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd66f07a00 a2=0 a3=7ffd66f079ec items=0 ppid=2447 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:48.760000 audit[4450]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd66f07a00 a2=0 a3=7ffd66f079ec items=0 ppid=2447 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:48.858880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875508183.mount: Deactivated successfully. 
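Annotation: mount-unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount875508183.mount (and the run-netns-cni\x2d... units earlier) are systemd-escaped paths: "/" becomes "-" and a literal "-" becomes \x2d. The Go sketch below reverses that mapping, the same thing systemd-escape --unescape --path does; it is only an illustration, not how systemd is invoked here.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapePath reverses systemd's unit-name escaping for a mount unit:
    // each "\xNN" turns back into the byte it encodes, and the remaining
    // "-" separators turn back into "/".
    func unescapePath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        var b strings.Builder
        for i := 0; i < len(name); i++ {
            if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
                if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 3
                    continue
                }
            }
            if name[i] == '-' {
                b.WriteByte('/')
                continue
            }
            b.WriteByte(name[i])
        }
        return "/" + b.String()
    }

    func main() {
        fmt.Println(unescapePath(`var-lib-containerd-tmpmounts-containerd\x2dmount875508183.mount`))
        // /var/lib/containerd/tmpmounts/containerd-mount875508183
    }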
Dec 13 14:26:48.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:48.885382 kernel: audit: type=1327 audit(1734100008.760:404): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:48.887322 env[1331]: time="2024-12-13T14:26:48.887256296Z" level=info msg="CreateContainer within sandbox \"408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"99dbccc1a76b428715c2afaaca52e7a6e9bb8c0f2b7f443a1fe73be396387d41\"" Dec 13 14:26:48.888428 env[1331]: time="2024-12-13T14:26:48.888386823Z" level=info msg="StartContainer for \"99dbccc1a76b428715c2afaaca52e7a6e9bb8c0f2b7f443a1fe73be396387d41\"" Dec 13 14:26:49.044127 env[1331]: time="2024-12-13T14:26:49.043311136Z" level=info msg="StartContainer for \"99dbccc1a76b428715c2afaaca52e7a6e9bb8c0f2b7f443a1fe73be396387d41\" returns successfully" Dec 13 14:26:49.706259 kubelet[2291]: I1213 14:26:49.706216 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67448fdc7d-2zp8b" podStartSLOduration=28.78597006 podStartE2EDuration="33.706154107s" podCreationTimestamp="2024-12-13 14:26:16 +0000 UTC" firstStartedPulling="2024-12-13 14:26:43.886886606 +0000 UTC m=+52.805925871" lastFinishedPulling="2024-12-13 14:26:48.80707065 +0000 UTC m=+57.726109918" observedRunningTime="2024-12-13 14:26:49.703900573 +0000 UTC m=+58.622939856" watchObservedRunningTime="2024-12-13 14:26:49.706154107 +0000 UTC m=+58.625193403" Dec 13 14:26:49.707146 kubelet[2291]: I1213 14:26:49.707121 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67448fdc7d-mzvdz" podStartSLOduration=28.760746216 podStartE2EDuration="33.707068328s" podCreationTimestamp="2024-12-13 14:26:16 +0000 UTC" firstStartedPulling="2024-12-13 14:26:43.597461755 +0000 UTC m=+52.516501021" lastFinishedPulling="2024-12-13 14:26:48.543783858 +0000 UTC m=+57.462823133" observedRunningTime="2024-12-13 14:26:48.702067488 +0000 UTC m=+57.621106794" watchObservedRunningTime="2024-12-13 14:26:49.707068328 +0000 UTC m=+58.626107612" Dec 13 14:26:49.715536 systemd[1]: run-containerd-runc-k8s.io-99dbccc1a76b428715c2afaaca52e7a6e9bb8c0f2b7f443a1fe73be396387d41-runc.pWsect.mount: Deactivated successfully. 
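Annotation: the audit PROCTITLE records above carry the process title as one hex blob with NUL bytes between arguments (in the matching SYSCALL records, arch=c000003e is AUDIT_ARCH_X86_64 and syscall 46 on that ABI is sendmsg, the netlink write that pushes the nft rule set). A small Go sketch, standard library only, that decodes the two proctitle values seen in these records.

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle converts an audit PROCTITLE hex string back into the
    // command line: the kernel logs the raw, NUL-separated process title.
    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
        return strings.Join(args, " "), nil
    }

    func main() {
        for _, h := range []string{
            // copied from the NETFILTER_CFG / PROCTITLE records above
            "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
            "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030",
        } {
            cmd, err := decodeProctitle(h)
            if err != nil {
                panic(err)
            }
            fmt.Println(cmd)
        }
        // iptables-restore -w 5 -W 100000 --noflush --counters
        // iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
    }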
Dec 13 14:26:49.741000 audit[4492]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:49.759412 kernel: audit: type=1325 audit(1734100009.741:405): table=filter:115 family=2 entries=10 op=nft_register_rule pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:49.741000 audit[4492]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe3cce7f00 a2=0 a3=7ffe3cce7eec items=0 ppid=2447 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:49.794395 kernel: audit: type=1300 audit(1734100009.741:405): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe3cce7f00 a2=0 a3=7ffe3cce7eec items=0 ppid=2447 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:49.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:49.812385 kernel: audit: type=1327 audit(1734100009.741:405): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:49.796000 audit[4492]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:49.855385 kernel: audit: type=1325 audit(1734100009.796:406): table=nat:116 family=2 entries=20 op=nft_register_rule pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:49.796000 audit[4492]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe3cce7f00 a2=0 a3=7ffe3cce7eec items=0 ppid=2447 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:49.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:50.504979 env[1331]: time="2024-12-13T14:26:50.504922189Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:50.508680 env[1331]: time="2024-12-13T14:26:50.508630179Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:50.512098 env[1331]: time="2024-12-13T14:26:50.512052195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:50.515446 env[1331]: time="2024-12-13T14:26:50.515401186Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:50.517177 env[1331]: time="2024-12-13T14:26:50.516346867Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 14:26:50.519688 env[1331]: time="2024-12-13T14:26:50.519630544Z" level=info msg="CreateContainer within sandbox \"c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 14:26:50.541609 env[1331]: time="2024-12-13T14:26:50.541554336Z" level=info msg="CreateContainer within sandbox \"c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b86ceef8e86da43e78f02e9c4a51a8d398815e8adea889ba46a5990eb1e3ea66\"" Dec 13 14:26:50.542494 env[1331]: time="2024-12-13T14:26:50.542453708Z" level=info msg="StartContainer for \"b86ceef8e86da43e78f02e9c4a51a8d398815e8adea889ba46a5990eb1e3ea66\"" Dec 13 14:26:50.613560 systemd[1]: run-containerd-runc-k8s.io-b86ceef8e86da43e78f02e9c4a51a8d398815e8adea889ba46a5990eb1e3ea66-runc.z20Yip.mount: Deactivated successfully. Dec 13 14:26:50.692146 kubelet[2291]: I1213 14:26:50.692108 2291 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:26:50.732847 env[1331]: time="2024-12-13T14:26:50.732707943Z" level=info msg="StartContainer for \"b86ceef8e86da43e78f02e9c4a51a8d398815e8adea889ba46a5990eb1e3ea66\" returns successfully" Dec 13 14:26:50.734558 env[1331]: time="2024-12-13T14:26:50.734515905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 14:26:51.383572 env[1331]: time="2024-12-13T14:26:51.383518384Z" level=info msg="StopPodSandbox for \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\"" Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.476 [WARNING][4555] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d7e4fbf8-2010-40d9-9761-00fa99980147", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc", Pod:"coredns-76f75df574-fcccw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib83792b1f20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.477 [INFO][4555] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.477 [INFO][4555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" iface="eth0" netns="" Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.477 [INFO][4555] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.477 [INFO][4555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.605 [INFO][4561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" HandleID="k8s-pod-network.3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.605 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.605 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.618 [WARNING][4561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" HandleID="k8s-pod-network.3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.619 [INFO][4561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" HandleID="k8s-pod-network.3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.625 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:51.643861 env[1331]: 2024-12-13 14:26:51.639 [INFO][4555] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:51.645557 env[1331]: time="2024-12-13T14:26:51.645504804Z" level=info msg="TearDown network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\" successfully" Dec 13 14:26:51.645777 env[1331]: time="2024-12-13T14:26:51.645738204Z" level=info msg="StopPodSandbox for \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\" returns successfully" Dec 13 14:26:51.646796 env[1331]: time="2024-12-13T14:26:51.646715548Z" level=info msg="RemovePodSandbox for \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\"" Dec 13 14:26:51.647061 env[1331]: time="2024-12-13T14:26:51.646999675Z" level=info msg="Forcibly stopping sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\"" Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.835 [WARNING][4581] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d7e4fbf8-2010-40d9-9761-00fa99980147", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"80f3c91aea0a0d7f98acbdf265ed1dabbe0c0e2bcd02c8d8d99c00eb8e1197cc", Pod:"coredns-76f75df574-fcccw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib83792b1f20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.836 [INFO][4581] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.836 [INFO][4581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" iface="eth0" netns="" Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.836 [INFO][4581] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.836 [INFO][4581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.943 [INFO][4587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" HandleID="k8s-pod-network.3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.945 [INFO][4587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.945 [INFO][4587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.955 [WARNING][4587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" HandleID="k8s-pod-network.3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.955 [INFO][4587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" HandleID="k8s-pod-network.3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--fcccw-eth0" Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.957 [INFO][4587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:51.966928 env[1331]: 2024-12-13 14:26:51.964 [INFO][4581] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2" Dec 13 14:26:51.968539 env[1331]: time="2024-12-13T14:26:51.966882137Z" level=info msg="TearDown network for sandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\" successfully" Dec 13 14:26:51.975601 env[1331]: time="2024-12-13T14:26:51.975546037Z" level=info msg="RemovePodSandbox \"3045ae0635abff47845e22f97c57f9df5b697b90c7e89679362bf7ab95fcbbb2\" returns successfully" Dec 13 14:26:51.976272 env[1331]: time="2024-12-13T14:26:51.976229163Z" level=info msg="StopPodSandbox for \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\"" Dec 13 14:26:52.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.74:22-139.178.68.195:42738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:52.013702 systemd[1]: Started sshd@10-10.128.0.74:22-139.178.68.195:42738.service. Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.157 [WARNING][4606] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0", GenerateName:"calico-apiserver-67448fdc7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"344bd3be-fc59-4092-b9f4-59fe040c7639", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67448fdc7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b", Pod:"calico-apiserver-67448fdc7d-mzvdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9e18a47052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.158 [INFO][4606] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.158 [INFO][4606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" iface="eth0" netns="" Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.158 [INFO][4606] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.158 [INFO][4606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.252 [INFO][4613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" HandleID="k8s-pod-network.b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.253 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.253 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.305 [WARNING][4613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" HandleID="k8s-pod-network.b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.306 [INFO][4613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" HandleID="k8s-pod-network.b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.309 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:52.313160 env[1331]: 2024-12-13 14:26:52.311 [INFO][4606] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:52.314061 env[1331]: time="2024-12-13T14:26:52.313134308Z" level=info msg="TearDown network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\" successfully" Dec 13 14:26:52.314061 env[1331]: time="2024-12-13T14:26:52.313896678Z" level=info msg="StopPodSandbox for \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\" returns successfully" Dec 13 14:26:52.314739 env[1331]: time="2024-12-13T14:26:52.314696484Z" level=info msg="RemovePodSandbox for \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\"" Dec 13 14:26:52.314889 env[1331]: time="2024-12-13T14:26:52.314749423Z" level=info msg="Forcibly stopping sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\"" Dec 13 14:26:52.345000 audit[4605]: USER_ACCT pid=4605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:52.348954 sshd[4605]: Accepted publickey for core from 139.178.68.195 port 42738 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:26:52.348000 audit[4605]: CRED_ACQ pid=4605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:52.348000 audit[4605]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2f9a4980 a2=3 a3=0 items=0 ppid=1 pid=4605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:52.348000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:26:52.351304 sshd[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:52.362526 systemd[1]: Started session-8.scope. Dec 13 14:26:52.362885 systemd-logind[1316]: New session 8 of user core. 
Dec 13 14:26:52.384000 audit[4605]: USER_START pid=4605 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:52.389000 audit[4634]: CRED_ACQ pid=4634 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.487 [WARNING][4635] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0", GenerateName:"calico-apiserver-67448fdc7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"344bd3be-fc59-4092-b9f4-59fe040c7639", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67448fdc7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"e07ccba1d5a974c1f306500738373667b2bfdb7c22d12a1c6f51a7b506da3b7b", Pod:"calico-apiserver-67448fdc7d-mzvdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9e18a47052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.487 [INFO][4635] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.488 [INFO][4635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" iface="eth0" netns="" Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.488 [INFO][4635] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.488 [INFO][4635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.586 [INFO][4641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" HandleID="k8s-pod-network.b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.587 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.587 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.598 [WARNING][4641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" HandleID="k8s-pod-network.b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.598 [INFO][4641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" HandleID="k8s-pod-network.b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--mzvdz-eth0" Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.600 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:52.618274 env[1331]: 2024-12-13 14:26:52.616 [INFO][4635] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36" Dec 13 14:26:52.619414 env[1331]: time="2024-12-13T14:26:52.619324421Z" level=info msg="TearDown network for sandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\" successfully" Dec 13 14:26:52.628794 env[1331]: time="2024-12-13T14:26:52.628738875Z" level=info msg="RemovePodSandbox \"b9c25cbf08ec9e71193d4d15dcfe59f7f4edf0a41e57099c35285d43f3c3ff36\" returns successfully" Dec 13 14:26:52.631465 env[1331]: time="2024-12-13T14:26:52.631426243Z" level=info msg="StopPodSandbox for \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\"" Dec 13 14:26:52.837559 sshd[4605]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:52.849000 audit[4605]: USER_END pid=4605 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:52.849000 audit[4605]: CRED_DISP pid=4605 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:52.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.74:22-139.178.68.195:42738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:52.853669 systemd[1]: sshd@10-10.128.0.74:22-139.178.68.195:42738.service: Deactivated successfully. Dec 13 14:26:52.855024 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:26:52.860916 systemd-logind[1316]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:26:52.863556 systemd-logind[1316]: Removed session 8. Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:52.873 [WARNING][4666] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e8fdff54-c28c-49b3-874a-502acca12325", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34", Pod:"coredns-76f75df574-7f6gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85f4bb9fd58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:52.875 [INFO][4666] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:52.875 [INFO][4666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" iface="eth0" netns="" Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:52.875 [INFO][4666] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:52.876 [INFO][4666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:52.992 [INFO][4674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" HandleID="k8s-pod-network.7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:52.994 [INFO][4674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:52.994 [INFO][4674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:53.004 [WARNING][4674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" HandleID="k8s-pod-network.7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:53.004 [INFO][4674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" HandleID="k8s-pod-network.7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:53.011 [INFO][4674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:53.020476 env[1331]: 2024-12-13 14:26:53.016 [INFO][4666] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:53.020476 env[1331]: time="2024-12-13T14:26:53.018125914Z" level=info msg="TearDown network for sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\" successfully" Dec 13 14:26:53.020476 env[1331]: time="2024-12-13T14:26:53.018173862Z" level=info msg="StopPodSandbox for \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\" returns successfully" Dec 13 14:26:53.020476 env[1331]: time="2024-12-13T14:26:53.018972964Z" level=info msg="RemovePodSandbox for \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\"" Dec 13 14:26:53.020476 env[1331]: time="2024-12-13T14:26:53.019018892Z" level=info msg="Forcibly stopping sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\"" Dec 13 14:26:53.044914 env[1331]: time="2024-12-13T14:26:53.041075932Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:53.053196 env[1331]: time="2024-12-13T14:26:53.053135884Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:53.083337 env[1331]: time="2024-12-13T14:26:53.075808610Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:53.083776 env[1331]: time="2024-12-13T14:26:53.079577242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 14:26:53.088565 env[1331]: time="2024-12-13T14:26:53.084693154Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:53.088947 env[1331]: time="2024-12-13T14:26:53.087087763Z" level=info msg="CreateContainer within sandbox \"c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 14:26:53.133637 env[1331]: time="2024-12-13T14:26:53.133567286Z" level=info msg="CreateContainer within sandbox \"c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c2381720c32fd6f8e4d9524fcf8124df3c2bb01dc1cbed85fdd013f061e882d9\"" Dec 13 14:26:53.136922 env[1331]: time="2024-12-13T14:26:53.136868569Z" level=info msg="StartContainer for \"c2381720c32fd6f8e4d9524fcf8124df3c2bb01dc1cbed85fdd013f061e882d9\"" Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.145 [WARNING][4695] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e8fdff54-c28c-49b3-874a-502acca12325", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"c35c4226579bd4f14d4132579daff3769676e85a7c53e4d3273d761812478e34", Pod:"coredns-76f75df574-7f6gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85f4bb9fd58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.149 [INFO][4695] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.150 [INFO][4695] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" iface="eth0" netns="" Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.150 [INFO][4695] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.151 [INFO][4695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.251 [INFO][4707] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" HandleID="k8s-pod-network.7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.251 [INFO][4707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.251 [INFO][4707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.265 [WARNING][4707] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" HandleID="k8s-pod-network.7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.265 [INFO][4707] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" HandleID="k8s-pod-network.7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-coredns--76f75df574--7f6gg-eth0" Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.266 [INFO][4707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:53.274295 env[1331]: 2024-12-13 14:26:53.268 [INFO][4695] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e" Dec 13 14:26:53.274295 env[1331]: time="2024-12-13T14:26:53.273064354Z" level=info msg="TearDown network for sandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\" successfully" Dec 13 14:26:53.287400 env[1331]: time="2024-12-13T14:26:53.286590906Z" level=info msg="RemovePodSandbox \"7594ca7c27a331d5d8bc6b043d2be702bbf8c84a02f5516b884d0a26bce3dd5e\" returns successfully" Dec 13 14:26:53.287400 env[1331]: time="2024-12-13T14:26:53.287239320Z" level=info msg="StopPodSandbox for \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\"" Dec 13 14:26:53.385400 env[1331]: time="2024-12-13T14:26:53.385299345Z" level=info msg="StartContainer for \"c2381720c32fd6f8e4d9524fcf8124df3c2bb01dc1cbed85fdd013f061e882d9\" returns successfully" Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.409 [WARNING][4741] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0", GenerateName:"calico-apiserver-67448fdc7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"35d24807-89c8-42de-a1a0-7e24511228d9", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67448fdc7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c", Pod:"calico-apiserver-67448fdc7d-2zp8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d8f7ae0dab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.410 [INFO][4741] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.410 [INFO][4741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" iface="eth0" netns="" Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.410 [INFO][4741] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.410 [INFO][4741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.436 [INFO][4761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" HandleID="k8s-pod-network.e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.436 [INFO][4761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.436 [INFO][4761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.444 [WARNING][4761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" HandleID="k8s-pod-network.e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.444 [INFO][4761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" HandleID="k8s-pod-network.e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.446 [INFO][4761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:53.457409 env[1331]: 2024-12-13 14:26:53.448 [INFO][4741] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:53.457409 env[1331]: time="2024-12-13T14:26:53.454976088Z" level=info msg="TearDown network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\" successfully" Dec 13 14:26:53.457409 env[1331]: time="2024-12-13T14:26:53.455056749Z" level=info msg="StopPodSandbox for \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\" returns successfully" Dec 13 14:26:53.457409 env[1331]: time="2024-12-13T14:26:53.455797509Z" level=info msg="RemovePodSandbox for \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\"" Dec 13 14:26:53.457409 env[1331]: time="2024-12-13T14:26:53.455841805Z" level=info msg="Forcibly stopping sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\"" Dec 13 14:26:53.551775 kubelet[2291]: I1213 14:26:53.550397 2291 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 14:26:53.551775 kubelet[2291]: I1213 14:26:53.550446 2291 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.545 [WARNING][4781] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0", GenerateName:"calico-apiserver-67448fdc7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"35d24807-89c8-42de-a1a0-7e24511228d9", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67448fdc7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"408eaa8550283787a651a1203a2c034b4a6f3723e405e33135ffe94c7a181d6c", Pod:"calico-apiserver-67448fdc7d-2zp8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d8f7ae0dab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.555 [INFO][4781] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.556 [INFO][4781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" iface="eth0" netns="" Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.557 [INFO][4781] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.557 [INFO][4781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.668 [INFO][4787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" HandleID="k8s-pod-network.e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.668 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.668 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.679 [WARNING][4787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" HandleID="k8s-pod-network.e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.679 [INFO][4787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" HandleID="k8s-pod-network.e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--apiserver--67448fdc7d--2zp8b-eth0" Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.681 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:53.684888 env[1331]: 2024-12-13 14:26:53.683 [INFO][4781] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891" Dec 13 14:26:53.685900 env[1331]: time="2024-12-13T14:26:53.685848881Z" level=info msg="TearDown network for sandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\" successfully" Dec 13 14:26:53.693589 env[1331]: time="2024-12-13T14:26:53.693499197Z" level=info msg="RemovePodSandbox \"e68adf546f7646ee2d650f0a8d6645b8c88a123977c31d856d6ff9b106fce891\" returns successfully" Dec 13 14:26:53.694487 env[1331]: time="2024-12-13T14:26:53.694445425Z" level=info msg="StopPodSandbox for \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\"" Dec 13 14:26:53.801675 kubelet[2291]: I1213 14:26:53.801250 2291 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-xl287" podStartSLOduration=28.712712743 podStartE2EDuration="36.801163094s" podCreationTimestamp="2024-12-13 14:26:17 +0000 UTC" firstStartedPulling="2024-12-13 14:26:44.995764866 +0000 UTC m=+53.914804125" lastFinishedPulling="2024-12-13 14:26:53.084214947 +0000 UTC m=+62.003254476" observedRunningTime="2024-12-13 14:26:53.800114207 +0000 UTC m=+62.719153482" watchObservedRunningTime="2024-12-13 14:26:53.801163094 +0000 UTC m=+62.720202378" Dec 13 14:26:53.838547 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 13 14:26:53.838730 kernel: audit: type=1325 audit(1734100013.831:416): table=filter:117 family=2 entries=9 op=nft_register_rule pid=4814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:53.831000 audit[4814]: NETFILTER_CFG table=filter:117 family=2 entries=9 op=nft_register_rule pid=4814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:53.831000 audit[4814]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffda7ac0af0 a2=0 a3=7ffda7ac0adc items=0 ppid=2447 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:53.890046 kernel: audit: type=1300 audit(1734100013.831:416): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffda7ac0af0 a2=0 a3=7ffda7ac0adc items=0 ppid=2447 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:53.892446 kernel: audit: type=1327 audit(1734100013.831:416): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:53.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:53.906000 audit[4814]: NETFILTER_CFG table=nat:118 family=2 entries=27 op=nft_register_chain pid=4814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:53.926202 kernel: audit: type=1325 audit(1734100013.906:417): table=nat:118 family=2 entries=27 op=nft_register_chain pid=4814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:26:53.906000 audit[4814]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffda7ac0af0 a2=0 a3=7ffda7ac0adc items=0 ppid=2447 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:53.966388 kernel: audit: type=1300 audit(1734100013.906:417): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffda7ac0af0 a2=0 a3=7ffda7ac0adc items=0 ppid=2447 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:53.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:53.993450 kernel: audit: type=1327 audit(1734100013.906:417): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:53.893 [WARNING][4807] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8e7e321-b490-4f2e-961a-6ba46f4b801a", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5", Pod:"csi-node-driver-xl287", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd45a46f2d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:53.893 [INFO][4807] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:53.894 [INFO][4807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" iface="eth0" netns="" Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:53.894 [INFO][4807] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:53.894 [INFO][4807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:54.005 [INFO][4815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" HandleID="k8s-pod-network.b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:54.005 [INFO][4815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:54.006 [INFO][4815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:54.019 [WARNING][4815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" HandleID="k8s-pod-network.b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:54.019 [INFO][4815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" HandleID="k8s-pod-network.b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:54.025 [INFO][4815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:54.029457 env[1331]: 2024-12-13 14:26:54.027 [INFO][4807] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:54.030974 env[1331]: time="2024-12-13T14:26:54.030905838Z" level=info msg="TearDown network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\" successfully" Dec 13 14:26:54.031162 env[1331]: time="2024-12-13T14:26:54.031128291Z" level=info msg="StopPodSandbox for \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\" returns successfully" Dec 13 14:26:54.032062 env[1331]: time="2024-12-13T14:26:54.032020541Z" level=info msg="RemovePodSandbox for \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\"" Dec 13 14:26:54.032302 env[1331]: time="2024-12-13T14:26:54.032231424Z" level=info msg="Forcibly stopping sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\"" Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.103 [WARNING][4835] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8e7e321-b490-4f2e-961a-6ba46f4b801a", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"c59f221b7bc5a6b979157b8af35a783d623382b6ca768405ce07ca9373db3ab5", Pod:"csi-node-driver-xl287", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd45a46f2d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.105 [INFO][4835] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.105 [INFO][4835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" iface="eth0" netns="" Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.105 [INFO][4835] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.105 [INFO][4835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.135 [INFO][4841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" HandleID="k8s-pod-network.b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.135 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.135 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.142 [WARNING][4841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" HandleID="k8s-pod-network.b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.142 [INFO][4841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" HandleID="k8s-pod-network.b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-csi--node--driver--xl287-eth0" Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.144 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:54.146854 env[1331]: 2024-12-13 14:26:54.145 [INFO][4835] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f" Dec 13 14:26:54.147753 env[1331]: time="2024-12-13T14:26:54.146900858Z" level=info msg="TearDown network for sandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\" successfully" Dec 13 14:26:54.152641 env[1331]: time="2024-12-13T14:26:54.152586451Z" level=info msg="RemovePodSandbox \"b006fd28c9869e0539bcb304d9085d468701b3ec8bc238a06d41e5ce6d9dde4f\" returns successfully" Dec 13 14:26:54.153408 env[1331]: time="2024-12-13T14:26:54.153368468Z" level=info msg="StopPodSandbox for \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\"" Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.207 [WARNING][4861] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0", GenerateName:"calico-kube-controllers-8c579667c-", Namespace:"calico-system", SelfLink:"", UID:"c2a62712-176a-4d0f-9d03-df8cafed69c7", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c579667c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2", Pod:"calico-kube-controllers-8c579667c-4vgnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1e846fa20f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:54.247340 env[1331]: 
2024-12-13 14:26:54.208 [INFO][4861] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.208 [INFO][4861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" iface="eth0" netns="" Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.208 [INFO][4861] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.208 [INFO][4861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.235 [INFO][4867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" HandleID="k8s-pod-network.7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.235 [INFO][4867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.235 [INFO][4867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.243 [WARNING][4867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" HandleID="k8s-pod-network.7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.243 [INFO][4867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" HandleID="k8s-pod-network.7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.244 [INFO][4867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:54.247340 env[1331]: 2024-12-13 14:26:54.245 [INFO][4861] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:54.248285 env[1331]: time="2024-12-13T14:26:54.247398656Z" level=info msg="TearDown network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\" successfully" Dec 13 14:26:54.248285 env[1331]: time="2024-12-13T14:26:54.247442976Z" level=info msg="StopPodSandbox for \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\" returns successfully" Dec 13 14:26:54.248285 env[1331]: time="2024-12-13T14:26:54.248162369Z" level=info msg="RemovePodSandbox for \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\"" Dec 13 14:26:54.248285 env[1331]: time="2024-12-13T14:26:54.248207029Z" level=info msg="Forcibly stopping sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\"" Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.295 [WARNING][4885] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0", GenerateName:"calico-kube-controllers-8c579667c-", Namespace:"calico-system", SelfLink:"", UID:"c2a62712-176a-4d0f-9d03-df8cafed69c7", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c579667c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-e9d13b183bac47f9af1b.c.flatcar-212911.internal", ContainerID:"e0c3d3754a1d8bdf186600a064792d9244ff1811fd697572803908c91b9023e2", Pod:"calico-kube-controllers-8c579667c-4vgnc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1e846fa20f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.296 [INFO][4885] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.296 [INFO][4885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" iface="eth0" netns="" Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.296 [INFO][4885] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.296 [INFO][4885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.325 [INFO][4891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" HandleID="k8s-pod-network.7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.326 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.326 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.332 [WARNING][4891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" HandleID="k8s-pod-network.7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.332 [INFO][4891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" HandleID="k8s-pod-network.7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Workload="ci--3510--3--6--e9d13b183bac47f9af1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--8c579667c--4vgnc-eth0" Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.334 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:26:54.336858 env[1331]: 2024-12-13 14:26:54.335 [INFO][4885] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b" Dec 13 14:26:54.337741 env[1331]: time="2024-12-13T14:26:54.336904167Z" level=info msg="TearDown network for sandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\" successfully" Dec 13 14:26:54.342822 env[1331]: time="2024-12-13T14:26:54.342691555Z" level=info msg="RemovePodSandbox \"7cf1832ebb313d3700d5b84a027cd6a78c31acdbeb60d36a23d6d8e3a6a2237b\" returns successfully" Dec 13 14:26:57.906899 kernel: audit: type=1130 audit(1734100017.881:418): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.74:22-139.178.68.195:34636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:57.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.74:22-139.178.68.195:34636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:57.881161 systemd[1]: Started sshd@11-10.128.0.74:22-139.178.68.195:34636.service. 
Dec 13 14:26:58.215265 kernel: audit: type=1101 audit(1734100018.182:419): pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:58.182000 audit[4899]: USER_ACCT pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:58.214427 sshd[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:58.216097 sshd[4899]: Accepted publickey for core from 139.178.68.195 port 34636 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:26:58.213000 audit[4899]: CRED_ACQ pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:58.232112 systemd[1]: Started session-9.scope. Dec 13 14:26:58.233447 systemd-logind[1316]: New session 9 of user core. Dec 13 14:26:58.244420 kernel: audit: type=1103 audit(1734100018.213:420): pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:58.213000 audit[4899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff09a65840 a2=3 a3=0 items=0 ppid=1 pid=4899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:58.261560 kernel: audit: type=1006 audit(1734100018.213:421): pid=4899 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 14:26:58.213000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:26:58.243000 audit[4899]: USER_START pid=4899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:58.243000 audit[4902]: CRED_ACQ pid=4902 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:58.511907 sshd[4899]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:58.512000 audit[4899]: USER_END pid=4899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:26:58.513000 audit[4899]: CRED_DISP pid=4899 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Dec 13 14:26:58.517540 systemd[1]: sshd@11-10.128.0.74:22-139.178.68.195:34636.service: Deactivated successfully. Dec 13 14:26:58.518945 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:26:58.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.74:22-139.178.68.195:34636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:58.520109 systemd-logind[1316]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:26:58.523168 systemd-logind[1316]: Removed session 9. Dec 13 14:26:58.629397 systemd[1]: Started sshd@12-10.128.0.74:22-143.198.125.142:39100.service. Dec 13 14:26:58.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.74:22-143.198.125.142:39100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:58.797614 systemd[1]: run-containerd-runc-k8s.io-570154f9dfa413c9e8c770406263dbe1aedfe21470bdb3732e9a7cc514aeb40e-runc.qBiOq3.mount: Deactivated successfully. Dec 13 14:26:58.913848 sshd[4912]: Invalid user pete from 143.198.125.142 port 39100 Dec 13 14:26:58.922000 audit[4912]: USER_AUTH pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="pete" exe="/usr/sbin/sshd" hostname=143.198.125.142 addr=143.198.125.142 terminal=ssh res=failed' Dec 13 14:26:58.924197 sshd[4912]: Failed password for invalid user pete from 143.198.125.142 port 39100 ssh2 Dec 13 14:26:58.929140 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 14:26:58.929274 kernel: audit: type=1100 audit(1734100018.922:428): pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="pete" exe="/usr/sbin/sshd" hostname=143.198.125.142 addr=143.198.125.142 terminal=ssh res=failed' Dec 13 14:26:58.959486 sshd[4912]: Received disconnect from 143.198.125.142 port 39100:11: Bye Bye [preauth] Dec 13 14:26:58.959486 sshd[4912]: Disconnected from invalid user pete 143.198.125.142 port 39100 [preauth] Dec 13 14:26:58.987916 kernel: audit: type=1131 audit(1734100018.960:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.74:22-143.198.125.142:39100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:58.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.74:22-143.198.125.142:39100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:58.961035 systemd[1]: sshd@12-10.128.0.74:22-143.198.125.142:39100.service: Deactivated successfully. 
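The burst from 143.198.125.142 above (invalid user "pete", failed password, disconnect during preauth) is ordinary scanner noise, and the two sshd message shapes it uses are enough to tally such attempts per source address. A small sketch along those lines, assuming journal text arrives on stdin; the patterns cover only the two messages shown and are illustrative rather than exhaustive:

#!/usr/bin/env python3
"""Count sshd invalid-user / failed-password attempts per source address."""
import re
import sys
from collections import Counter

INVALID = re.compile(r"sshd\[\d+\]: Invalid user (?P<user>\S+) from (?P<ip>\S+) port \d+")
FAILED = re.compile(r"sshd\[\d+\]: Failed password for .* from (?P<ip>\S+) port \d+ ssh2")

attempts = Counter()
users = Counter()

for line in sys.stdin:
    m = INVALID.search(line)
    if m:
        attempts[m["ip"]] += 1
        users[m["user"]] += 1
        continue
    m = FAILED.search(line)
    if m:
        attempts[m["ip"]] += 1

for ip, n in attempts.most_common():
    print(f"{n:5d}  {ip}")
if users:
    print("usernames tried:", ", ".join(u for u, _ in users.most_common(5)))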
Dec 13 14:27:01.972138 kubelet[2291]: I1213 14:27:01.972096 2291 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:27:02.025000 audit[4963]: NETFILTER_CFG table=filter:119 family=2 entries=8 op=nft_register_rule pid=4963 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:02.025000 audit[4963]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd8156b630 a2=0 a3=7ffd8156b61c items=0 ppid=2447 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:02.077177 kernel: audit: type=1325 audit(1734100022.025:430): table=filter:119 family=2 entries=8 op=nft_register_rule pid=4963 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:02.077389 kernel: audit: type=1300 audit(1734100022.025:430): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd8156b630 a2=0 a3=7ffd8156b61c items=0 ppid=2447 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:02.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:02.079000 audit[4963]: NETFILTER_CFG table=nat:120 family=2 entries=34 op=nft_register_chain pid=4963 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:02.109807 kernel: audit: type=1327 audit(1734100022.025:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:02.109960 kernel: audit: type=1325 audit(1734100022.079:431): table=nat:120 family=2 entries=34 op=nft_register_chain pid=4963 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:02.110042 kernel: audit: type=1300 audit(1734100022.079:431): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd8156b630 a2=0 a3=7ffd8156b61c items=0 ppid=2447 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:02.079000 audit[4963]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd8156b630 a2=0 a3=7ffd8156b61c items=0 ppid=2447 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:02.079000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:02.158415 kernel: audit: type=1327 audit(1734100022.079:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:03.557695 systemd[1]: Started sshd@13-10.128.0.74:22-139.178.68.195:34642.service. Dec 13 14:27:03.584651 kernel: audit: type=1130 audit(1734100023.557:432): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.74:22-139.178.68.195:34642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:03.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.74:22-139.178.68.195:34642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:03.849000 audit[4964]: USER_ACCT pid=4964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:03.880096 sshd[4964]: Accepted publickey for core from 139.178.68.195 port 34642 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:03.880639 kernel: audit: type=1101 audit(1734100023.849:433): pid=4964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:03.881072 sshd[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:03.879000 audit[4964]: CRED_ACQ pid=4964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:03.879000 audit[4964]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1824eab0 a2=3 a3=0 items=0 ppid=1 pid=4964 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:03.879000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:03.891282 systemd[1]: Started session-10.scope. Dec 13 14:27:03.891874 systemd-logind[1316]: New session 10 of user core. 
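The audit PROCTITLE fields above are the process command line, hex-encoded with NUL-separated argv elements: 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]", and the longer value attached to the iptables-restore SYSCALL records earlier decodes to iptables-restore -w 5 -W 100000 --noflush --counters. A standard-library-only sketch of the decoding:

#!/usr/bin/env python3
"""Decode audit PROCTITLE values (hex-encoded argv, NUL-separated)."""

def decode_proctitle(hexstr: str) -> str:
    raw = bytes.fromhex(hexstr)
    # argv elements are separated by NUL bytes in the audit record
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters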
Dec 13 14:27:03.902000 audit[4964]: USER_START pid=4964 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:03.905000 audit[4967]: CRED_ACQ pid=4967 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.154695 sshd[4964]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:04.156000 audit[4964]: USER_END pid=4964 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.162115 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 13 14:27:04.162231 kernel: audit: type=1106 audit(1734100024.156:438): pid=4964 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.163697 systemd[1]: sshd@13-10.128.0.74:22-139.178.68.195:34642.service: Deactivated successfully. Dec 13 14:27:04.165035 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:27:04.176490 systemd-logind[1316]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:27:04.178194 systemd-logind[1316]: Removed session 10. Dec 13 14:27:04.197259 kernel: audit: type=1104 audit(1734100024.156:439): pid=4964 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.156000 audit[4964]: CRED_DISP pid=4964 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.223302 kernel: audit: type=1131 audit(1734100024.161:440): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.74:22-139.178.68.195:34642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:04.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.74:22-139.178.68.195:34642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:04.226039 systemd[1]: Started sshd@14-10.128.0.74:22-139.178.68.195:34658.service. Dec 13 14:27:04.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.74:22-139.178.68.195:34658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:04.272399 kernel: audit: type=1130 audit(1734100024.226:441): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.74:22-139.178.68.195:34658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:04.527000 audit[4977]: USER_ACCT pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.558866 sshd[4977]: Accepted publickey for core from 139.178.68.195 port 34658 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:04.559569 kernel: audit: type=1101 audit(1734100024.527:442): pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.560556 sshd[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:04.558000 audit[4977]: CRED_ACQ pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.569695 systemd[1]: Started session-11.scope. Dec 13 14:27:04.570970 systemd-logind[1316]: New session 11 of user core. Dec 13 14:27:04.587628 kernel: audit: type=1103 audit(1734100024.558:443): pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.604713 kernel: audit: type=1006 audit(1734100024.559:444): pid=4977 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 13 14:27:04.604862 kernel: audit: type=1300 audit(1734100024.559:444): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdaf069530 a2=3 a3=0 items=0 ppid=1 pid=4977 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:04.559000 audit[4977]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdaf069530 a2=3 a3=0 items=0 ppid=1 pid=4977 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:04.559000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:04.587000 audit[4977]: USER_START pid=4977 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.642378 kernel: audit: type=1327 audit(1734100024.559:444): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:04.642432 kernel: audit: type=1105 audit(1734100024.587:445): pid=4977 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.593000 audit[4980]: CRED_ACQ pid=4980 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.893675 sshd[4977]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:04.895000 audit[4977]: USER_END pid=4977 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.896000 audit[4977]: CRED_DISP pid=4977 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:04.899807 systemd-logind[1316]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:27:04.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.74:22-139.178.68.195:34658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:04.901225 systemd[1]: sshd@14-10.128.0.74:22-139.178.68.195:34658.service: Deactivated successfully. Dec 13 14:27:04.903508 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:27:04.905827 systemd-logind[1316]: Removed session 11. Dec 13 14:27:04.936777 systemd[1]: Started sshd@15-10.128.0.74:22-139.178.68.195:34662.service. Dec 13 14:27:04.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.74:22-139.178.68.195:34662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.230000 audit[4989]: USER_ACCT pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:05.232040 sshd[4989]: Accepted publickey for core from 139.178.68.195 port 34662 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:05.232000 audit[4989]: CRED_ACQ pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:05.233000 audit[4989]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd330acc80 a2=3 a3=0 items=0 ppid=1 pid=4989 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:05.233000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:05.234183 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:05.242718 systemd[1]: Started session-12.scope. 
Dec 13 14:27:05.243063 systemd-logind[1316]: New session 12 of user core. Dec 13 14:27:05.251000 audit[4989]: USER_START pid=4989 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:05.255000 audit[4992]: CRED_ACQ pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:05.570009 sshd[4989]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:05.572000 audit[4989]: USER_END pid=4989 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:05.572000 audit[4989]: CRED_DISP pid=4989 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:05.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.74:22-139.178.68.195:34662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.575643 systemd-logind[1316]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:27:05.575946 systemd[1]: sshd@15-10.128.0.74:22-139.178.68.195:34662.service: Deactivated successfully. Dec 13 14:27:05.578403 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:27:05.579191 systemd-logind[1316]: Removed session 12. Dec 13 14:27:07.684377 systemd[1]: run-containerd-runc-k8s.io-570154f9dfa413c9e8c770406263dbe1aedfe21470bdb3732e9a7cc514aeb40e-runc.BP3cPe.mount: Deactivated successfully. Dec 13 14:27:10.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.74:22-139.178.68.195:41970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.615758 systemd[1]: Started sshd@16-10.128.0.74:22-139.178.68.195:41970.service. Dec 13 14:27:10.621549 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 13 14:27:10.621682 kernel: audit: type=1130 audit(1734100030.614:459): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.74:22-139.178.68.195:41970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:10.947021 kernel: audit: type=1101 audit(1734100030.915:460): pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:10.915000 audit[5023]: USER_ACCT pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:10.948957 sshd[5023]: Accepted publickey for core from 139.178.68.195 port 41970 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:10.947759 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:10.984857 kernel: audit: type=1103 audit(1734100030.945:461): pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:10.984981 kernel: audit: type=1006 audit(1734100030.945:462): pid=5023 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 13 14:27:10.945000 audit[5023]: CRED_ACQ pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:10.957973 systemd[1]: Started session-13.scope. Dec 13 14:27:10.958903 systemd-logind[1316]: New session 13 of user core. 
Dec 13 14:27:10.945000 audit[5023]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb1f16960 a2=3 a3=0 items=0 ppid=1 pid=5023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:11.024757 kernel: audit: type=1300 audit(1734100030.945:462): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb1f16960 a2=3 a3=0 items=0 ppid=1 pid=5023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:11.026414 kernel: audit: type=1327 audit(1734100030.945:462): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:10.945000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:10.984000 audit[5023]: USER_START pid=5023 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:11.035397 kernel: audit: type=1105 audit(1734100030.984:463): pid=5023 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:10.989000 audit[5026]: CRED_ACQ pid=5026 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:11.067557 kernel: audit: type=1103 audit(1734100030.989:464): pid=5026 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:11.247623 sshd[5023]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:11.248000 audit[5023]: USER_END pid=5023 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:11.282511 kernel: audit: type=1106 audit(1734100031.248:465): pid=5023 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:11.248000 audit[5023]: CRED_DISP pid=5023 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:11.289461 systemd[1]: sshd@16-10.128.0.74:22-139.178.68.195:41970.service: Deactivated successfully. Dec 13 14:27:11.292030 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:27:11.293199 systemd-logind[1316]: Session 13 logged out. Waiting for processes to exit. 
Dec 13 14:27:11.295147 systemd-logind[1316]: Removed session 13. Dec 13 14:27:11.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.74:22-139.178.68.195:41970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.310387 kernel: audit: type=1104 audit(1734100031.248:466): pid=5023 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.74:22-139.178.68.195:46662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:16.292877 systemd[1]: Started sshd@17-10.128.0.74:22-139.178.68.195:46662.service. Dec 13 14:27:16.298564 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:27:16.298677 kernel: audit: type=1130 audit(1734100036.292:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.74:22-139.178.68.195:46662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:16.593000 audit[5040]: USER_ACCT pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.594075 sshd[5040]: Accepted publickey for core from 139.178.68.195 port 46662 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:16.623389 kernel: audit: type=1101 audit(1734100036.593:469): pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.623000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.635218 systemd[1]: Started session-14.scope. Dec 13 14:27:16.625775 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:16.636535 systemd-logind[1316]: New session 14 of user core. 
Dec 13 14:27:16.654936 kernel: audit: type=1103 audit(1734100036.623:470): pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.655072 kernel: audit: type=1006 audit(1734100036.624:471): pid=5040 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 14:27:16.624000 audit[5040]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd945bbd70 a2=3 a3=0 items=0 ppid=1 pid=5040 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:16.667786 kernel: audit: type=1300 audit(1734100036.624:471): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd945bbd70 a2=3 a3=0 items=0 ppid=1 pid=5040 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:16.624000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:16.704117 kernel: audit: type=1327 audit(1734100036.624:471): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:16.649000 audit[5040]: USER_START pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.736609 kernel: audit: type=1105 audit(1734100036.649:472): pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.736785 kernel: audit: type=1103 audit(1734100036.654:473): pid=5043 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.654000 audit[5043]: CRED_ACQ pid=5043 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.904906 sshd[5040]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:16.907000 audit[5040]: USER_END pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.911464 systemd[1]: sshd@17-10.128.0.74:22-139.178.68.195:46662.service: Deactivated successfully. Dec 13 14:27:16.914168 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:27:16.915123 systemd-logind[1316]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:27:16.917334 systemd-logind[1316]: Removed session 14. 
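Every accepted connection above runs through the same audit arc: SERVICE_START for a transient sshd@N-<local>:22-<peer>:<port> unit, USER_ACCT and CRED_ACQ, USER_START opening a new ses= number, then USER_END, CRED_DISP and SERVICE_STOP a few seconds later. Pairing the kernel-echoed type=1105 (PAM session_open) and type=1106 (PAM session_close) records by ses= gives rough per-session durations; a sketch under the assumption that the journal text is on stdin, with the caveat that kauditd suppresses some echoes (the "callbacks suppressed" lines above), so the pairing is approximate:

#!/usr/bin/env python3
"""Pair kernel-echoed audit session open/close records by ses= to estimate SSH session durations."""
import re
import sys

# type=1105 is USER_START (PAM session_open), type=1106 is USER_END (PAM session_close),
# matching the journald USER_START/USER_END lines with the same ses= values above.
AUDIT = re.compile(
    r"audit: type=(?P<type>1105|1106) "
    r"audit\((?P<ts>\d+\.\d+):\d+\).*?ses=(?P<ses>\d+)"
)

opened, closed = {}, {}
for line in sys.stdin:
    for m in AUDIT.finditer(line):
        ts, ses = float(m["ts"]), int(m["ses"])
        (opened if m["type"] == "1105" else closed)[ses] = ts

for ses in sorted(opened):
    if ses in closed:
        print(f"session {ses}: {closed[ses] - opened[ses]:.1f}s")
    else:
        print(f"session {ses}: no session_close echoed")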
Dec 13 14:27:16.941398 kernel: audit: type=1106 audit(1734100036.907:474): pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.907000 audit[5040]: CRED_DISP pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:16.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.74:22-139.178.68.195:46662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:16.966409 kernel: audit: type=1104 audit(1734100036.907:475): pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:21.950548 systemd[1]: Started sshd@18-10.128.0.74:22-139.178.68.195:46664.service. Dec 13 14:27:21.981937 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:27:21.982079 kernel: audit: type=1130 audit(1734100041.950:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.74:22-139.178.68.195:46664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:21.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.74:22-139.178.68.195:46664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:22.252000 audit[5059]: USER_ACCT pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.284084 sshd[5059]: Accepted publickey for core from 139.178.68.195 port 46664 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:22.284661 kernel: audit: type=1101 audit(1734100042.252:478): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.285239 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:22.282000 audit[5059]: CRED_ACQ pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.297945 systemd[1]: Started session-15.scope. Dec 13 14:27:22.299155 systemd-logind[1316]: New session 15 of user core. 
Dec 13 14:27:22.317806 kernel: audit: type=1103 audit(1734100042.282:479): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.317950 kernel: audit: type=1006 audit(1734100042.283:480): pid=5059 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 14:27:22.283000 audit[5059]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd349cb40 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:22.357794 kernel: audit: type=1300 audit(1734100042.283:480): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd349cb40 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:22.359311 kernel: audit: type=1327 audit(1734100042.283:480): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:22.283000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:22.314000 audit[5059]: USER_START pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.400310 kernel: audit: type=1105 audit(1734100042.314:481): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.324000 audit[5062]: CRED_ACQ pid=5062 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.425443 kernel: audit: type=1103 audit(1734100042.324:482): pid=5062 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.600100 sshd[5059]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:22.600000 audit[5059]: USER_END pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.613430 systemd[1]: sshd@18-10.128.0.74:22-139.178.68.195:46664.service: Deactivated successfully. Dec 13 14:27:22.614652 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:27:22.617385 systemd-logind[1316]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:27:22.619026 systemd-logind[1316]: Removed session 15. 
Dec 13 14:27:22.601000 audit[5059]: CRED_DISP pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.635412 kernel: audit: type=1106 audit(1734100042.600:483): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.635507 kernel: audit: type=1104 audit(1734100042.601:484): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:22.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.74:22-139.178.68.195:46664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:27.646009 systemd[1]: Started sshd@19-10.128.0.74:22-139.178.68.195:51540.service. Dec 13 14:27:27.678723 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:27:27.678900 kernel: audit: type=1130 audit(1734100047.646:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.74:22-139.178.68.195:51540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:27.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.74:22-139.178.68.195:51540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:27.941000 audit[5076]: USER_ACCT pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:27.972455 kernel: audit: type=1101 audit(1734100047.941:487): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:27.972644 sshd[5076]: Accepted publickey for core from 139.178.68.195 port 51540 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:27.973229 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:27.971000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:27.984562 systemd[1]: Started session-16.scope. Dec 13 14:27:27.986916 systemd-logind[1316]: New session 16 of user core. 
Dec 13 14:27:28.000513 kernel: audit: type=1103 audit(1734100047.971:488): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:27.971000 audit[5076]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdac446f0 a2=3 a3=0 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:28.047303 kernel: audit: type=1006 audit(1734100047.971:489): pid=5076 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 14:27:28.047501 kernel: audit: type=1300 audit(1734100047.971:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdac446f0 a2=3 a3=0 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:28.047564 kernel: audit: type=1327 audit(1734100047.971:489): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:27.971000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:27.994000 audit[5076]: USER_START pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.089292 kernel: audit: type=1105 audit(1734100047.994:490): pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.089511 kernel: audit: type=1103 audit(1734100047.999:491): pid=5079 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:27.999000 audit[5079]: CRED_ACQ pid=5079 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.260876 sshd[5076]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:28.263000 audit[5076]: USER_END pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.275299 systemd[1]: sshd@19-10.128.0.74:22-139.178.68.195:51540.service: Deactivated successfully. Dec 13 14:27:28.277576 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:27:28.279220 systemd-logind[1316]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:27:28.281392 systemd-logind[1316]: Removed session 16. 
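The SERVICE_START/SERVICE_STOP records name the per-connection unit directly, so the peer address is recoverable from the unit name alone, e.g. sshd@19-10.128.0.74:22-139.178.68.195:51540.service. A tiny parsing sketch, using two unit names taken verbatim from the records above:

#!/usr/bin/env python3
"""Split an sshd per-connection unit name into its address components."""
import re

UNIT = re.compile(
    r"sshd@(?P<instance>\d+)-"
    r"(?P<local>[\d.]+):(?P<lport>\d+)-"
    r"(?P<peer>[\d.]+):(?P<pport>\d+)\.service"
)

for name in (
    "sshd@19-10.128.0.74:22-139.178.68.195:51540.service",
    "sshd@12-10.128.0.74:22-143.198.125.142:39100.service",
):
    m = UNIT.fullmatch(name)
    if m:
        print(f"{name}: peer {m['peer']}:{m['pport']} -> local {m['local']}:{m['lport']}")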
Dec 13 14:27:28.263000 audit[5076]: CRED_DISP pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.297509 kernel: audit: type=1106 audit(1734100048.263:492): pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.297600 kernel: audit: type=1104 audit(1734100048.263:493): pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.305651 systemd[1]: Started sshd@20-10.128.0.74:22-139.178.68.195:51556.service. Dec 13 14:27:28.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.74:22-139.178.68.195:51540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:28.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.74:22-139.178.68.195:51556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:28.603000 audit[5089]: USER_ACCT pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.603731 sshd[5089]: Accepted publickey for core from 139.178.68.195 port 51556 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:28.604000 audit[5089]: CRED_ACQ pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.604000 audit[5089]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde262edc0 a2=3 a3=0 items=0 ppid=1 pid=5089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:28.604000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:28.605834 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:28.613255 systemd[1]: Started session-17.scope. Dec 13 14:27:28.614533 systemd-logind[1316]: New session 17 of user core. 
Dec 13 14:27:28.623000 audit[5089]: USER_START pid=5089 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.626000 audit[5092]: CRED_ACQ pid=5092 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:28.759852 systemd[1]: run-containerd-runc-k8s.io-570154f9dfa413c9e8c770406263dbe1aedfe21470bdb3732e9a7cc514aeb40e-runc.w1SYYY.mount: Deactivated successfully. Dec 13 14:27:28.788186 systemd[1]: run-containerd-runc-k8s.io-c66bbd4c8d3bcaa6d383c434d50c62e8913d93b7f73ec3a3fbcf5a80dc9ded7c-runc.YEUbGU.mount: Deactivated successfully. Dec 13 14:27:29.037204 sshd[5089]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:29.039000 audit[5089]: USER_END pid=5089 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:29.039000 audit[5089]: CRED_DISP pid=5089 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:29.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.74:22-139.178.68.195:51556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:29.042322 systemd[1]: sshd@20-10.128.0.74:22-139.178.68.195:51556.service: Deactivated successfully. Dec 13 14:27:29.044861 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:27:29.045075 systemd-logind[1316]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:27:29.048063 systemd-logind[1316]: Removed session 17. Dec 13 14:27:29.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.74:22-139.178.68.195:51562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:29.081748 systemd[1]: Started sshd@21-10.128.0.74:22-139.178.68.195:51562.service. 
Dec 13 14:27:29.374000 audit[5141]: USER_ACCT pid=5141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:29.375065 sshd[5141]: Accepted publickey for core from 139.178.68.195 port 51562 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:29.376000 audit[5141]: CRED_ACQ pid=5141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:29.376000 audit[5141]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2ba8f5b0 a2=3 a3=0 items=0 ppid=1 pid=5141 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:29.376000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:29.377392 sshd[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:29.385127 systemd[1]: Started session-18.scope. Dec 13 14:27:29.386086 systemd-logind[1316]: New session 18 of user core. Dec 13 14:27:29.394000 audit[5141]: USER_START pid=5141 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:29.396000 audit[5144]: CRED_ACQ pid=5144 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:31.601000 audit[5154]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5154 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:31.601000 audit[5154]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffcc1ccc440 a2=0 a3=7ffcc1ccc42c items=0 ppid=2447 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:31.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:31.606000 audit[5154]: NETFILTER_CFG table=nat:122 family=2 entries=22 op=nft_register_rule pid=5154 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:31.606000 audit[5154]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcc1ccc440 a2=0 a3=0 items=0 ppid=2447 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:31.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:31.647575 sshd[5141]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:31.647000 audit[5141]: USER_END pid=5141 uid=0 auid=500 ses=18 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:31.648000 audit[5141]: CRED_DISP pid=5141 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:31.648000 audit[5156]: NETFILTER_CFG table=filter:123 family=2 entries=32 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:31.648000 audit[5156]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe42b3bb70 a2=0 a3=7ffe42b3bb5c items=0 ppid=2447 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:31.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:31.652080 systemd-logind[1316]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:27:31.653991 systemd[1]: sshd@21-10.128.0.74:22-139.178.68.195:51562.service: Deactivated successfully. Dec 13 14:27:31.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.74:22-139.178.68.195:51562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:31.655333 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:27:31.654000 audit[5156]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:31.654000 audit[5156]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe42b3bb70 a2=0 a3=0 items=0 ppid=2447 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:31.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:31.660326 systemd-logind[1316]: Removed session 18. Dec 13 14:27:31.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.74:22-139.178.68.195:51574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:31.689954 systemd[1]: Started sshd@22-10.128.0.74:22-139.178.68.195:51574.service. 
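The NETFILTER_CFG records interleaved with this session (type=1325 in the kernel echoes) come from repeated iptables-restore runs forked from ppid 2447, each registering a fresh batch of rules or chains per table (filter:121 entries=20, nat:122 entries=22, and so on); on a Kubernetes node this looks like a proxy's periodic rule sync, though the parent's identity is not shown in these records. A sketch that tallies the journald-form records by table and operation to watch rule churn, assuming journal text on stdin and using only the field names visible above:

#!/usr/bin/env python3
"""Tally NETFILTER_CFG audit records by table and operation."""
import re
import sys
from collections import Counter

NFT = re.compile(
    r"NETFILTER_CFG table=(?P<table>\w+):(?P<gen>\d+) family=\d+ "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+)"
)

entries = Counter()
events = Counter()
for line in sys.stdin:
    for m in NFT.finditer(line):
        key = (m["table"], m["op"])
        events[key] += 1
        entries[key] += int(m["entries"])

for (table, op), n in sorted(events.items()):
    print(f"{table:8} {op:20} events={n:4d} entries={entries[(table, op)]}")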
Dec 13 14:27:31.994000 audit[5159]: USER_ACCT pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:31.998011 sshd[5159]: Accepted publickey for core from 139.178.68.195 port 51574 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:31.997000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:31.997000 audit[5159]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3fd71af0 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:31.997000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:31.999968 sshd[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:32.008931 systemd-logind[1316]: New session 19 of user core. Dec 13 14:27:32.009708 systemd[1]: Started session-19.scope. Dec 13 14:27:32.017000 audit[5159]: USER_START pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.020000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.429340 sshd[5159]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:32.430000 audit[5159]: USER_END pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.430000 audit[5159]: CRED_DISP pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.434818 systemd[1]: sshd@22-10.128.0.74:22-139.178.68.195:51574.service: Deactivated successfully. Dec 13 14:27:32.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.74:22-139.178.68.195:51574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:32.436860 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:27:32.436873 systemd-logind[1316]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:27:32.440315 systemd-logind[1316]: Removed session 19. 
Dec 13 14:27:32.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.74:22-139.178.68.195:51584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:32.473414 systemd[1]: Started sshd@23-10.128.0.74:22-139.178.68.195:51584.service. Dec 13 14:27:32.771568 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 13 14:27:32.771753 kernel: audit: type=1101 audit(1734100052.759:527): pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.759000 audit[5170]: USER_ACCT pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.768328 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:32.772370 sshd[5170]: Accepted publickey for core from 139.178.68.195 port 51584 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:32.788081 systemd[1]: Started session-20.scope. Dec 13 14:27:32.789427 systemd-logind[1316]: New session 20 of user core. Dec 13 14:27:32.766000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.834425 kernel: audit: type=1103 audit(1734100052.766:528): pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.834689 kernel: audit: type=1006 audit(1734100052.766:529): pid=5170 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 13 14:27:32.766000 audit[5170]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff07f239b0 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:32.848482 kernel: audit: type=1300 audit(1734100052.766:529): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff07f239b0 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:32.766000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:32.876394 kernel: audit: type=1327 audit(1734100052.766:529): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:32.798000 audit[5170]: USER_START pid=5170 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.918008 kernel: audit: type=1105 
audit(1734100052.798:530): pid=5170 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.918200 kernel: audit: type=1103 audit(1734100052.801:531): pid=5173 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:32.801000 audit[5173]: CRED_ACQ pid=5173 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:33.064974 sshd[5170]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:33.065000 audit[5170]: USER_END pid=5170 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:33.078315 systemd[1]: sshd@23-10.128.0.74:22-139.178.68.195:51584.service: Deactivated successfully. Dec 13 14:27:33.079583 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:27:33.081653 systemd-logind[1316]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:27:33.083316 systemd-logind[1316]: Removed session 20. Dec 13 14:27:33.065000 audit[5170]: CRED_DISP pid=5170 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:33.100390 kernel: audit: type=1106 audit(1734100053.065:532): pid=5170 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:33.100451 kernel: audit: type=1104 audit(1734100053.065:533): pid=5170 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:33.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.74:22-139.178.68.195:51584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:33.125415 kernel: audit: type=1131 audit(1734100053.075:534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.74:22-139.178.68.195:51584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:38.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.74:22-139.178.68.195:47760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:38.109923 systemd[1]: Started sshd@24-10.128.0.74:22-139.178.68.195:47760.service. 
Dec 13 14:27:38.136387 kernel: audit: type=1130 audit(1734100058.108:535): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.74:22-139.178.68.195:47760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:38.434000 audit[5185]: USER_ACCT pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.438484 sshd[5185]: Accepted publickey for core from 139.178.68.195 port 47760 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:38.466469 kernel: audit: type=1101 audit(1734100058.434:536): pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.467955 sshd[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:38.480165 systemd[1]: Started session-21.scope. Dec 13 14:27:38.482025 systemd-logind[1316]: New session 21 of user core. Dec 13 14:27:38.465000 audit[5185]: CRED_ACQ pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.515406 kernel: audit: type=1103 audit(1734100058.465:537): pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.550393 kernel: audit: type=1006 audit(1734100058.465:538): pid=5185 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 13 14:27:38.465000 audit[5185]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd16e614a0 a2=3 a3=0 items=0 ppid=1 pid=5185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:38.579397 kernel: audit: type=1300 audit(1734100058.465:538): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd16e614a0 a2=3 a3=0 items=0 ppid=1 pid=5185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:38.465000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:38.524000 audit[5185]: USER_START pid=5185 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.623428 kernel: audit: type=1327 audit(1734100058.465:538): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:38.623568 kernel: audit: type=1105 audit(1734100058.524:539): pid=5185 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.527000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.631000 audit[5190]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:38.666830 kernel: audit: type=1103 audit(1734100058.527:540): pid=5188 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.666995 kernel: audit: type=1325 audit(1734100058.631:541): table=filter:125 family=2 entries=20 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:38.681416 kernel: audit: type=1300 audit(1734100058.631:541): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc7c216590 a2=0 a3=7ffc7c21657c items=0 ppid=2447 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:38.631000 audit[5190]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc7c216590 a2=0 a3=7ffc7c21657c items=0 ppid=2447 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:38.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:38.704000 audit[5190]: NETFILTER_CFG table=nat:126 family=2 entries=106 op=nft_register_chain pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:27:38.704000 audit[5190]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc7c216590 a2=0 a3=7ffc7c21657c items=0 ppid=2447 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:38.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:27:38.890307 sshd[5185]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:38.890000 audit[5185]: USER_END pid=5185 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.891000 audit[5185]: CRED_DISP pid=5185 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:38.895714 systemd[1]: sshd@24-10.128.0.74:22-139.178.68.195:47760.service: Deactivated 
successfully. Dec 13 14:27:38.897094 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:27:38.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.74:22-139.178.68.195:47760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:38.897442 systemd-logind[1316]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:27:38.899687 systemd-logind[1316]: Removed session 21. Dec 13 14:27:43.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.74:22-139.178.68.195:47774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:43.936791 systemd[1]: Started sshd@25-10.128.0.74:22-139.178.68.195:47774.service. Dec 13 14:27:43.942378 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:27:43.942513 kernel: audit: type=1130 audit(1734100063.936:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.74:22-139.178.68.195:47774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:44.240000 audit[5201]: USER_ACCT pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.241216 sshd[5201]: Accepted publickey for core from 139.178.68.195 port 47774 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:44.271664 kernel: audit: type=1101 audit(1734100064.240:547): pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.271000 audit[5201]: CRED_ACQ pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.274986 sshd[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:44.304146 systemd-logind[1316]: New session 22 of user core. Dec 13 14:27:44.306719 systemd[1]: Started session-22.scope. 
Dec 13 14:27:44.315295 kernel: audit: type=1103 audit(1734100064.271:548): pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.315451 kernel: audit: type=1006 audit(1734100064.271:549): pid=5201 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 14:27:44.315507 kernel: audit: type=1300 audit(1734100064.271:549): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8e0bb270 a2=3 a3=0 items=0 ppid=1 pid=5201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:44.271000 audit[5201]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8e0bb270 a2=3 a3=0 items=0 ppid=1 pid=5201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:44.271000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:44.353574 kernel: audit: type=1327 audit(1734100064.271:549): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:44.327000 audit[5201]: USER_START pid=5201 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.361070 kernel: audit: type=1105 audit(1734100064.327:550): pid=5201 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.330000 audit[5204]: CRED_ACQ pid=5204 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.411408 kernel: audit: type=1103 audit(1734100064.330:551): pid=5204 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.616132 sshd[5201]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:44.617000 audit[5201]: USER_END pid=5201 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.621590 systemd-logind[1316]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:27:44.623733 systemd[1]: sshd@25-10.128.0.74:22-139.178.68.195:47774.service: Deactivated successfully. Dec 13 14:27:44.625043 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:27:44.626883 systemd-logind[1316]: Removed session 22. 
Dec 13 14:27:44.651397 kernel: audit: type=1106 audit(1734100064.617:552): pid=5201 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.617000 audit[5201]: CRED_DISP pid=5201 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.676780 kernel: audit: type=1104 audit(1734100064.617:553): pid=5201 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:44.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.74:22-139.178.68.195:47774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:49.661808 systemd[1]: Started sshd@26-10.128.0.74:22-139.178.68.195:36634.service. Dec 13 14:27:49.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.128.0.74:22-139.178.68.195:36634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:49.668200 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:27:49.668308 kernel: audit: type=1130 audit(1734100069.661:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.128.0.74:22-139.178.68.195:36634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:49.956000 audit[5214]: USER_ACCT pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:49.960491 sshd[5214]: Accepted publickey for core from 139.178.68.195 port 36634 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:49.988413 kernel: audit: type=1101 audit(1734100069.956:556): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:49.989654 sshd[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:49.987000 audit[5214]: CRED_ACQ pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:50.002548 systemd[1]: Started session-23.scope. Dec 13 14:27:50.003584 systemd-logind[1316]: New session 23 of user core. 
Dec 13 14:27:50.017383 kernel: audit: type=1103 audit(1734100069.987:557): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:49.987000 audit[5214]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd93d6e10 a2=3 a3=0 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:50.037456 kernel: audit: type=1006 audit(1734100069.987:558): pid=5214 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 13 14:27:50.037547 kernel: audit: type=1300 audit(1734100069.987:558): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd93d6e10 a2=3 a3=0 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:49.987000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:50.065399 kernel: audit: type=1327 audit(1734100069.987:558): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:50.009000 audit[5214]: USER_START pid=5214 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:50.107401 kernel: audit: type=1105 audit(1734100070.009:559): pid=5214 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:50.107593 kernel: audit: type=1103 audit(1734100070.017:560): pid=5217 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:50.017000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:50.272265 sshd[5214]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:50.272000 audit[5214]: USER_END pid=5214 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:50.273000 audit[5214]: CRED_DISP pid=5214 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:50.310850 systemd-logind[1316]: Session 23 logged out. Waiting for processes to exit. 
Dec 13 14:27:50.313732 systemd[1]: sshd@26-10.128.0.74:22-139.178.68.195:36634.service: Deactivated successfully. Dec 13 14:27:50.315014 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:27:50.317536 systemd-logind[1316]: Removed session 23. Dec 13 14:27:50.331426 kernel: audit: type=1106 audit(1734100070.272:561): pid=5214 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:50.331588 kernel: audit: type=1104 audit(1734100070.273:562): pid=5214 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:50.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.128.0.74:22-139.178.68.195:36634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:55.350085 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:27:55.350265 kernel: audit: type=1130 audit(1734100075.318:564): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.128.0.74:22-139.178.68.195:36642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:55.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.128.0.74:22-139.178.68.195:36642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:55.319580 systemd[1]: Started sshd@27-10.128.0.74:22-139.178.68.195:36642.service. Dec 13 14:27:55.622000 audit[5229]: USER_ACCT pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.653520 kernel: audit: type=1101 audit(1734100075.622:565): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.654093 sshd[5229]: Accepted publickey for core from 139.178.68.195 port 36642 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:55.652000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.654956 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:55.674685 systemd[1]: Started session-24.scope. Dec 13 14:27:55.676148 systemd-logind[1316]: New session 24 of user core. 
Dec 13 14:27:55.680403 kernel: audit: type=1103 audit(1734100075.652:566): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.707959 kernel: audit: type=1006 audit(1734100075.652:567): pid=5229 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 14:27:55.708088 kernel: audit: type=1300 audit(1734100075.652:567): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd89b30950 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:55.652000 audit[5229]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd89b30950 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:55.652000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:55.729485 kernel: audit: type=1327 audit(1734100075.652:567): proctitle=737368643A20636F7265205B707269765D Dec 13 14:27:55.683000 audit[5229]: USER_START pid=5229 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.689000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.795491 kernel: audit: type=1105 audit(1734100075.683:568): pid=5229 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.795672 kernel: audit: type=1103 audit(1734100075.689:569): pid=5233 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.942210 sshd[5229]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:55.943000 audit[5229]: USER_END pid=5229 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.954946 systemd[1]: sshd@27-10.128.0.74:22-139.178.68.195:36642.service: Deactivated successfully. Dec 13 14:27:55.957325 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:27:55.958272 systemd-logind[1316]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:27:55.959963 systemd-logind[1316]: Removed session 24. 
Dec 13 14:27:55.943000 audit[5229]: CRED_DISP pid=5229 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:56.001952 kernel: audit: type=1106 audit(1734100075.943:570): pid=5229 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:56.002149 kernel: audit: type=1104 audit(1734100075.943:571): pid=5229 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 14:27:55.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.128.0.74:22-139.178.68.195:36642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'