Dec 13 01:53:14.821270 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 01:53:14.821287 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:53:14.821298 kernel: BIOS-provided physical RAM map: Dec 13 01:53:14.821306 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 01:53:14.821313 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 01:53:14.821320 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 01:53:14.821326 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 01:53:14.821334 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 01:53:14.821341 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 01:53:14.821351 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 01:53:14.821358 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Dec 13 01:53:14.821363 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Dec 13 01:53:14.821373 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 01:53:14.821380 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 01:53:14.821390 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 01:53:14.821399 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 01:53:14.821405 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 
01:53:14.821413 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:53:14.821420 kernel: NX (Execute Disable) protection: active Dec 13 01:53:14.821428 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Dec 13 01:53:14.821435 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Dec 13 01:53:14.821441 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Dec 13 01:53:14.821446 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Dec 13 01:53:14.821452 kernel: extended physical RAM map: Dec 13 01:53:14.821458 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 01:53:14.821469 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 01:53:14.821475 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 01:53:14.821480 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 01:53:14.821486 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 01:53:14.821492 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 01:53:14.821498 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 01:53:14.821504 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Dec 13 01:53:14.821510 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Dec 13 01:53:14.821515 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable Dec 13 01:53:14.821521 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Dec 13 01:53:14.821527 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Dec 13 01:53:14.821534 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Dec 13 01:53:14.821540 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 01:53:14.821545 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 01:53:14.821551 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 01:53:14.821562 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 01:53:14.821569 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 01:53:14.821575 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:53:14.821582 kernel: efi: EFI v2.70 by EDK II Dec 13 01:53:14.821589 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Dec 13 01:53:14.821595 kernel: random: crng init done Dec 13 01:53:14.821601 kernel: SMBIOS 2.8 present. Dec 13 01:53:14.821608 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Dec 13 01:53:14.821614 kernel: Hypervisor detected: KVM Dec 13 01:53:14.821620 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:53:14.821627 kernel: kvm-clock: cpu 0, msr 6e19b001, primary cpu clock Dec 13 01:53:14.821633 kernel: kvm-clock: using sched offset of 4044322373 cycles Dec 13 01:53:14.821641 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:53:14.821648 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:53:14.821654 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:53:14.821662 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:53:14.821679 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Dec 13 01:53:14.821688 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:53:14.821696 kernel: Using GB pages for direct mapping Dec 13 01:53:14.821702 kernel: Secure boot disabled Dec 13 01:53:14.821709 kernel: ACPI: Early table checksum verification disabled Dec 13 01:53:14.821716 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Dec 13 01:53:14.821723 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 01:53:14.821730 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:53:14.821736 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:53:14.821743 kernel: ACPI: FACS 0x000000009CBDD000 000040 Dec 13 01:53:14.821749 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:53:14.821755 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:53:14.821762 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:53:14.821768 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:53:14.821776 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 01:53:14.821782 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Dec 13 01:53:14.821789 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Dec 13 01:53:14.821796 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Dec 13 01:53:14.821802 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Dec 13 01:53:14.821808 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Dec 13 01:53:14.821815 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Dec 13 01:53:14.821821 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Dec 13 01:53:14.821828 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Dec 13 01:53:14.821835 kernel: No NUMA configuration found Dec 13 01:53:14.821842 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Dec 13 01:53:14.821848 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Dec 13 
01:53:14.821854 kernel: Zone ranges: Dec 13 01:53:14.821861 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:53:14.821867 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Dec 13 01:53:14.821873 kernel: Normal empty Dec 13 01:53:14.821880 kernel: Movable zone start for each node Dec 13 01:53:14.821886 kernel: Early memory node ranges Dec 13 01:53:14.821894 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 01:53:14.821900 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Dec 13 01:53:14.821907 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Dec 13 01:53:14.821913 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Dec 13 01:53:14.821919 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Dec 13 01:53:14.821926 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Dec 13 01:53:14.821932 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Dec 13 01:53:14.821938 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:53:14.821945 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 01:53:14.821951 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Dec 13 01:53:14.821958 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:53:14.821965 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Dec 13 01:53:14.821971 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Dec 13 01:53:14.821990 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Dec 13 01:53:14.821997 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:53:14.822003 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:53:14.822010 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:53:14.822016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:53:14.822022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:53:14.822030 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:53:14.822037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:53:14.822043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:53:14.822049 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:53:14.822056 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:53:14.822062 kernel: TSC deadline timer available Dec 13 01:53:14.822068 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:53:14.822075 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:53:14.822081 kernel: kvm-guest: setup PV sched yield Dec 13 01:53:14.822089 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:53:14.822095 kernel: Booting paravirtualized kernel on KVM Dec 13 01:53:14.822106 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:53:14.822114 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:53:14.822121 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Dec 13 01:53:14.822128 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 01:53:14.822135 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:53:14.822141 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 01:53:14.822148 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Dec 13 01:53:14.822155 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:53:14.822161 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:53:14.822168 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Dec 13 01:53:14.822177 kernel: Policy zone: DMA32 Dec 13 01:53:14.822185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:53:14.822192 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:53:14.822199 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:53:14.822207 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:53:14.822214 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:53:14.822221 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 169308K reserved, 0K cma-reserved) Dec 13 01:53:14.822228 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:53:14.822235 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 01:53:14.822241 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 01:53:14.822248 kernel: rcu: Hierarchical RCU implementation. Dec 13 01:53:14.822255 kernel: rcu: RCU event tracing is enabled. Dec 13 01:53:14.822263 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:53:14.822271 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:53:14.822279 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:53:14.822287 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 01:53:14.822295 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:53:14.822303 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:53:14.822310 kernel: Console: colour dummy device 80x25 Dec 13 01:53:14.822316 kernel: printk: console [ttyS0] enabled Dec 13 01:53:14.822323 kernel: ACPI: Core revision 20210730 Dec 13 01:53:14.822330 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:53:14.822338 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:53:14.822345 kernel: x2apic enabled Dec 13 01:53:14.822352 kernel: Switched APIC routing to physical x2apic. Dec 13 01:53:14.822358 kernel: kvm-guest: setup PV IPIs Dec 13 01:53:14.822365 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:53:14.822372 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:53:14.822379 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Dec 13 01:53:14.822386 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:53:14.822396 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:53:14.822404 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:53:14.822411 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:53:14.822417 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:53:14.822424 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:53:14.822431 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:53:14.822438 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:53:14.822445 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:53:14.822452 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:53:14.822459 kernel: Speculative Store Bypass: Mitigation: 
Speculative Store Bypass disabled via prctl and seccomp Dec 13 01:53:14.822467 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:53:14.822474 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:53:14.822481 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:53:14.822488 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:53:14.822495 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:53:14.822501 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:53:14.822508 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:53:14.822515 kernel: LSM: Security Framework initializing Dec 13 01:53:14.822522 kernel: SELinux: Initializing. Dec 13 01:53:14.822530 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:53:14.822537 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:53:14.822544 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:53:14.822550 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:53:14.822557 kernel: ... version: 0 Dec 13 01:53:14.822564 kernel: ... bit width: 48 Dec 13 01:53:14.822571 kernel: ... generic registers: 6 Dec 13 01:53:14.822577 kernel: ... value mask: 0000ffffffffffff Dec 13 01:53:14.822584 kernel: ... max period: 00007fffffffffff Dec 13 01:53:14.822592 kernel: ... fixed-purpose events: 0 Dec 13 01:53:14.822599 kernel: ... event mask: 000000000000003f Dec 13 01:53:14.822605 kernel: signal: max sigframe size: 1776 Dec 13 01:53:14.822612 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:53:14.822619 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:53:14.822625 kernel: x86: Booting SMP configuration: Dec 13 01:53:14.822632 kernel: .... 
node #0, CPUs: #1 Dec 13 01:53:14.822639 kernel: kvm-clock: cpu 1, msr 6e19b041, secondary cpu clock Dec 13 01:53:14.822646 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 01:53:14.822654 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Dec 13 01:53:14.822660 kernel: #2 Dec 13 01:53:14.822674 kernel: kvm-clock: cpu 2, msr 6e19b081, secondary cpu clock Dec 13 01:53:14.822681 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 01:53:14.822688 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Dec 13 01:53:14.822694 kernel: #3 Dec 13 01:53:14.822701 kernel: kvm-clock: cpu 3, msr 6e19b0c1, secondary cpu clock Dec 13 01:53:14.822708 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 01:53:14.822714 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Dec 13 01:53:14.822721 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:53:14.822729 kernel: smpboot: Max logical packages: 1 Dec 13 01:53:14.822736 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:53:14.822743 kernel: devtmpfs: initialized Dec 13 01:53:14.822750 kernel: x86/mm: Memory block size: 128MB Dec 13 01:53:14.822757 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Dec 13 01:53:14.822763 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Dec 13 01:53:14.822770 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Dec 13 01:53:14.822777 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Dec 13 01:53:14.822784 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Dec 13 01:53:14.822795 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:53:14.822802 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:53:14.822809 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:53:14.822815 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Dec 13 01:53:14.822822 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:53:14.822829 kernel: audit: type=2000 audit(1734054794.558:1): state=initialized audit_enabled=0 res=1 Dec 13 01:53:14.822836 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:53:14.822842 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:53:14.822850 kernel: cpuidle: using governor menu Dec 13 01:53:14.822857 kernel: ACPI: bus type PCI registered Dec 13 01:53:14.822864 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:53:14.822870 kernel: dca service started, version 1.12.1 Dec 13 01:53:14.822877 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:53:14.822884 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 01:53:14.822891 kernel: PCI: Using configuration type 1 for base access Dec 13 01:53:14.822898 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:53:14.822905 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:53:14.822914 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:53:14.822923 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:53:14.822932 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:53:14.822941 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:53:14.822950 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:53:14.822959 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 01:53:14.822968 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 01:53:14.822987 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 01:53:14.822997 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:53:14.823005 kernel: ACPI: Interpreter enabled Dec 13 01:53:14.823016 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:53:14.823025 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:53:14.823034 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:53:14.823043 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:53:14.823052 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:53:14.823186 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:53:14.823313 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:53:14.823419 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:53:14.823432 kernel: PCI host bridge to bus 0000:00 Dec 13 01:53:14.823529 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:53:14.823602 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:53:14.823663 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:53:14.823734 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 
01:53:14.823793 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:53:14.823854 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Dec 13 01:53:14.823914 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:53:14.824008 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:53:14.824089 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:53:14.824161 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Dec 13 01:53:14.824229 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Dec 13 01:53:14.824295 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Dec 13 01:53:14.824364 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Dec 13 01:53:14.824431 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:53:14.824505 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:53:14.824576 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Dec 13 01:53:14.824643 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Dec 13 01:53:14.824721 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Dec 13 01:53:14.824794 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:53:14.824864 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Dec 13 01:53:14.824935 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Dec 13 01:53:14.825015 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Dec 13 01:53:14.825101 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:53:14.825180 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Dec 13 01:53:14.825248 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Dec 13 01:53:14.825323 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Dec 13 01:53:14.825389 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfffc0000-0xffffffff pref] Dec 13 01:53:14.825461 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:53:14.825530 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:53:14.825626 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:53:14.825715 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Dec 13 01:53:14.825784 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Dec 13 01:53:14.825858 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:53:14.825936 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Dec 13 01:53:14.825946 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:53:14.825953 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:53:14.825960 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:53:14.825968 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:53:14.825990 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:53:14.825999 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:53:14.826009 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:53:14.826016 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:53:14.826025 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:53:14.826034 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:53:14.826042 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:53:14.826049 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:53:14.826057 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:53:14.826066 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:53:14.826075 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:53:14.826084 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 
01:53:14.826091 kernel: iommu: Default domain type: Translated Dec 13 01:53:14.826098 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:53:14.826173 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:53:14.826240 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:53:14.826307 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:53:14.826318 kernel: vgaarb: loaded Dec 13 01:53:14.826327 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:53:14.826336 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:53:14.826347 kernel: PTP clock support registered Dec 13 01:53:14.826356 kernel: Registered efivars operations Dec 13 01:53:14.826365 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:53:14.826374 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:53:14.826381 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Dec 13 01:53:14.826389 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Dec 13 01:53:14.826398 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Dec 13 01:53:14.826407 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Dec 13 01:53:14.826414 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Dec 13 01:53:14.826425 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Dec 13 01:53:14.826434 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:53:14.826442 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:53:14.826449 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:53:14.826456 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:53:14.826463 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:53:14.826470 kernel: pnp: PnP ACPI init Dec 13 01:53:14.826553 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:53:14.826566 kernel: pnp: PnP ACPI: found 6 devices 
Dec 13 01:53:14.826583 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:53:14.826591 kernel: NET: Registered PF_INET protocol family Dec 13 01:53:14.826598 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:53:14.826605 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:53:14.826612 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:53:14.826619 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:53:14.826626 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 01:53:14.826634 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:53:14.826641 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:53:14.826648 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:53:14.826657 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:53:14.826674 kernel: NET: Registered PF_XDP protocol family Dec 13 01:53:14.826757 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Dec 13 01:53:14.826852 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Dec 13 01:53:14.826932 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:53:14.827011 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:53:14.827072 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:53:14.827145 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:53:14.827208 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:53:14.827278 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Dec 13 01:53:14.827288 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:53:14.827296 
kernel: Initialise system trusted keyrings Dec 13 01:53:14.827302 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:53:14.827309 kernel: Key type asymmetric registered Dec 13 01:53:14.827319 kernel: Asymmetric key parser 'x509' registered Dec 13 01:53:14.827326 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 01:53:14.827342 kernel: io scheduler mq-deadline registered Dec 13 01:53:14.827350 kernel: io scheduler kyber registered Dec 13 01:53:14.827357 kernel: io scheduler bfq registered Dec 13 01:53:14.827364 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:53:14.827372 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:53:14.827379 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:53:14.827386 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:53:14.827395 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:53:14.827402 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:53:14.827409 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:53:14.827416 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:53:14.827424 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:53:14.827513 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:53:14.827525 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:53:14.827589 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 01:53:14.827674 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:53:14 UTC (1734054794) Dec 13 01:53:14.827756 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:53:14.827767 kernel: efifb: probing for efifb Dec 13 01:53:14.827774 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 13 01:53:14.827782 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 13 01:53:14.827789 kernel: efifb: 
scrolling: redraw Dec 13 01:53:14.827796 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:53:14.827803 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 01:53:14.827810 kernel: fb0: EFI VGA frame buffer device Dec 13 01:53:14.827825 kernel: pstore: Registered efi as persistent store backend Dec 13 01:53:14.827832 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:53:14.827839 kernel: Segment Routing with IPv6 Dec 13 01:53:14.827847 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:53:14.827855 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:53:14.827862 kernel: Key type dns_resolver registered Dec 13 01:53:14.827871 kernel: IPI shorthand broadcast: enabled Dec 13 01:53:14.827878 kernel: sched_clock: Marking stable (414494654, 128286953)->(585687141, -42905534) Dec 13 01:53:14.827886 kernel: registered taskstats version 1 Dec 13 01:53:14.827904 kernel: Loading compiled-in X.509 certificates Dec 13 01:53:14.827912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 01:53:14.827919 kernel: Key type .fscrypt registered Dec 13 01:53:14.827926 kernel: Key type fscrypt-provisioning registered Dec 13 01:53:14.827933 kernel: pstore: Using crash dump compression: deflate Dec 13 01:53:14.827942 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:53:14.827957 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:53:14.827967 kernel: ima: No architecture policies found Dec 13 01:53:14.827985 kernel: clk: Disabling unused clocks Dec 13 01:53:14.827993 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 01:53:14.828000 kernel: Write protecting the kernel read-only data: 28672k Dec 13 01:53:14.828010 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 01:53:14.828017 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 01:53:14.828025 kernel: Run /init as init process Dec 13 01:53:14.828044 kernel: with arguments: Dec 13 01:53:14.828051 kernel: /init Dec 13 01:53:14.828058 kernel: with environment: Dec 13 01:53:14.828065 kernel: HOME=/ Dec 13 01:53:14.828072 kernel: TERM=linux Dec 13 01:53:14.828079 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:53:14.828089 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:53:14.828109 systemd[1]: Detected virtualization kvm. Dec 13 01:53:14.828118 systemd[1]: Detected architecture x86-64. Dec 13 01:53:14.828126 systemd[1]: Running in initrd. Dec 13 01:53:14.828133 systemd[1]: No hostname configured, using default hostname. Dec 13 01:53:14.828141 systemd[1]: Hostname set to <localhost>. Dec 13 01:53:14.828159 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:53:14.828166 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:53:14.828174 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:53:14.828181 systemd[1]: Reached target cryptsetup.target. Dec 13 01:53:14.828191 systemd[1]: Reached target paths.target. Dec 13 01:53:14.828198 systemd[1]: Reached target slices.target. 
Dec 13 01:53:14.828215 systemd[1]: Reached target swap.target. Dec 13 01:53:14.828223 systemd[1]: Reached target timers.target. Dec 13 01:53:14.828231 systemd[1]: Listening on iscsid.socket. Dec 13 01:53:14.828238 systemd[1]: Listening on iscsiuio.socket. Dec 13 01:53:14.828246 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 01:53:14.828254 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 01:53:14.828263 systemd[1]: Listening on systemd-journald.socket. Dec 13 01:53:14.828270 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:53:14.828279 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:53:14.828289 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:53:14.828299 systemd[1]: Reached target sockets.target. Dec 13 01:53:14.828308 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:53:14.828316 systemd[1]: Finished network-cleanup.service. Dec 13 01:53:14.828338 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:53:14.828348 systemd[1]: Starting systemd-journald.service... Dec 13 01:53:14.828358 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:53:14.828368 systemd[1]: Starting systemd-resolved.service... Dec 13 01:53:14.828378 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 01:53:14.828387 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:53:14.828396 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:53:14.828406 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:53:14.828416 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 01:53:14.828435 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 01:53:14.828444 kernel: audit: type=1130 audit(1734054794.822:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:14.828459 systemd-journald[197]: Journal started Dec 13 01:53:14.828498 systemd-journald[197]: Runtime Journal (/run/log/journal/a13cfc49f4084068b15b4d44cff2d0d3) is 6.0M, max 48.4M, 42.4M free. Dec 13 01:53:14.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.830004 systemd[1]: Started systemd-journald.service. Dec 13 01:53:14.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.833805 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:53:14.835459 kernel: audit: type=1130 audit(1734054794.829:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.835503 systemd-modules-load[198]: Inserted module 'overlay' Dec 13 01:53:14.839587 kernel: audit: type=1130 audit(1734054794.834:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.842655 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 01:53:14.843399 systemd-resolved[199]: Positive Trust Anchors: Dec 13 01:53:14.843407 systemd-resolved[199]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:53:14.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.843432 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:53:14.854789 kernel: audit: type=1130 audit(1734054794.843:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.845530 systemd-resolved[199]: Defaulting to hostname 'linux'. Dec 13 01:53:14.856229 systemd[1]: Starting dracut-cmdline.service... Dec 13 01:53:14.857788 systemd[1]: Started systemd-resolved.service. Dec 13 01:53:14.859544 systemd[1]: Reached target nss-lookup.target. Dec 13 01:53:14.863965 kernel: audit: type=1130 audit(1734054794.858:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.866114 dracut-cmdline[215]: dracut-dracut-053 Dec 13 01:53:14.868251 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Dec 13 01:53:14.868275 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:53:14.873845 systemd-modules-load[198]: Inserted module 'br_netfilter' Dec 13 01:53:14.874781 kernel: Bridge firewalling registered Dec 13 01:53:14.890992 kernel: SCSI subsystem initialized Dec 13 01:53:14.902099 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:53:14.902122 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:53:14.903426 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 01:53:14.906089 systemd-modules-load[198]: Inserted module 'dm_multipath' Dec 13 01:53:14.906914 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:53:14.912357 kernel: audit: type=1130 audit(1734054794.906:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.911038 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:53:14.914034 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:53:14.918640 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 01:53:14.923064 kernel: audit: type=1130 audit(1734054794.918:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.933000 kernel: iscsi: registered transport (tcp) Dec 13 01:53:14.954024 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:53:14.954048 kernel: QLogic iSCSI HBA Driver Dec 13 01:53:14.978531 systemd[1]: Finished dracut-cmdline.service. Dec 13 01:53:14.983660 kernel: audit: type=1130 audit(1734054794.979:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:14.980025 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 01:53:15.024004 kernel: raid6: avx2x4 gen() 30771 MB/s Dec 13 01:53:15.040999 kernel: raid6: avx2x4 xor() 7732 MB/s Dec 13 01:53:15.057999 kernel: raid6: avx2x2 gen() 32525 MB/s Dec 13 01:53:15.074998 kernel: raid6: avx2x2 xor() 19177 MB/s Dec 13 01:53:15.091997 kernel: raid6: avx2x1 gen() 26518 MB/s Dec 13 01:53:15.109002 kernel: raid6: avx2x1 xor() 15243 MB/s Dec 13 01:53:15.126004 kernel: raid6: sse2x4 gen() 14485 MB/s Dec 13 01:53:15.142999 kernel: raid6: sse2x4 xor() 7313 MB/s Dec 13 01:53:15.159998 kernel: raid6: sse2x2 gen() 16001 MB/s Dec 13 01:53:15.176998 kernel: raid6: sse2x2 xor() 9666 MB/s Dec 13 01:53:15.193999 kernel: raid6: sse2x1 gen() 12041 MB/s Dec 13 01:53:15.211460 kernel: raid6: sse2x1 xor() 7606 MB/s Dec 13 01:53:15.211472 kernel: raid6: using algorithm avx2x2 gen() 32525 MB/s Dec 13 01:53:15.211482 kernel: raid6: .... xor() 19177 MB/s, rmw enabled Dec 13 01:53:15.212187 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:53:15.223994 kernel: xor: automatically using best checksumming function avx Dec 13 01:53:15.312031 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:53:15.318508 systemd[1]: Finished dracut-pre-udev.service. Dec 13 01:53:15.323091 kernel: audit: type=1130 audit(1734054795.318:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:15.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:15.321000 audit: BPF prog-id=7 op=LOAD Dec 13 01:53:15.322000 audit: BPF prog-id=8 op=LOAD Dec 13 01:53:15.323326 systemd[1]: Starting systemd-udevd.service... Dec 13 01:53:15.335351 systemd-udevd[400]: Using default interface naming scheme 'v252'. Dec 13 01:53:15.339065 systemd[1]: Started systemd-udevd.service. 
Dec 13 01:53:15.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:15.341609 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 01:53:15.352069 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Dec 13 01:53:15.373563 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:53:15.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:15.375838 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:53:15.406418 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:53:15.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:15.428602 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:53:15.449128 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:53:15.449142 kernel: libata version 3.00 loaded. Dec 13 01:53:15.449151 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:53:15.449160 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:53:15.449168 kernel: GPT:9289727 != 19775487 Dec 13 01:53:15.449176 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:53:15.449190 kernel: GPT:9289727 != 19775487 Dec 13 01:53:15.449197 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 01:53:15.449206 kernel: AES CTR mode by8 optimization enabled Dec 13 01:53:15.449214 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:53:15.453993 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:53:15.481722 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:53:15.481738 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:53:15.481828 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:53:15.481907 kernel: scsi host0: ahci Dec 13 01:53:15.482015 kernel: scsi host1: ahci Dec 13 01:53:15.482095 kernel: scsi host2: ahci Dec 13 01:53:15.482173 kernel: scsi host3: ahci Dec 13 01:53:15.482255 kernel: scsi host4: ahci Dec 13 01:53:15.482333 kernel: scsi host5: ahci Dec 13 01:53:15.482416 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Dec 13 01:53:15.482426 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Dec 13 01:53:15.482435 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Dec 13 01:53:15.482443 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Dec 13 01:53:15.482452 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Dec 13 01:53:15.482461 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Dec 13 01:53:15.482469 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Dec 13 01:53:15.476985 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:53:15.483664 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 01:53:15.493800 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:53:15.505084 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:53:15.509973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:53:15.512351 systemd[1]: Starting disk-uuid.service... 
Dec 13 01:53:15.532995 disk-uuid[547]: Primary Header is updated. Dec 13 01:53:15.532995 disk-uuid[547]: Secondary Entries is updated. Dec 13 01:53:15.532995 disk-uuid[547]: Secondary Header is updated. Dec 13 01:53:15.536292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:53:15.795780 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:53:15.795846 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:53:15.795856 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:53:15.795864 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:53:15.795872 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:53:15.797004 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:53:15.798259 kernel: ata3.00: applying bridge limits Dec 13 01:53:15.798997 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:53:15.799994 kernel: ata3.00: configured for UDMA/100 Dec 13 01:53:15.802002 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:53:15.829108 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:53:15.846653 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:53:15.846666 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:53:16.542003 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:53:16.542101 disk-uuid[548]: The operation has completed successfully. Dec 13 01:53:16.566059 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:53:16.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.566135 systemd[1]: Finished disk-uuid.service. 
Dec 13 01:53:16.567806 systemd[1]: Starting verity-setup.service... Dec 13 01:53:16.581001 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:53:16.598372 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:53:16.599666 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:53:16.601936 systemd[1]: Finished verity-setup.service. Dec 13 01:53:16.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.656002 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 01:53:16.656411 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:53:16.657257 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:53:16.657821 systemd[1]: Starting ignition-setup.service... Dec 13 01:53:16.658917 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 01:53:16.666626 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:53:16.666683 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:53:16.666696 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:53:16.675273 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:53:16.682321 systemd[1]: Finished ignition-setup.service. Dec 13 01:53:16.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.683772 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 01:53:16.716398 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:53:16.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 01:53:16.717000 audit: BPF prog-id=9 op=LOAD Dec 13 01:53:16.718857 systemd[1]: Starting systemd-networkd.service... Dec 13 01:53:16.720323 ignition[665]: Ignition 2.14.0 Dec 13 01:53:16.720330 ignition[665]: Stage: fetch-offline Dec 13 01:53:16.720365 ignition[665]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:16.720372 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:53:16.720463 ignition[665]: parsed url from cmdline: "" Dec 13 01:53:16.720467 ignition[665]: no config URL provided Dec 13 01:53:16.720472 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:53:16.720479 ignition[665]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:53:16.720495 ignition[665]: op(1): [started] loading QEMU firmware config module Dec 13 01:53:16.720500 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:53:16.724575 ignition[665]: op(1): [finished] loading QEMU firmware config module Dec 13 01:53:16.741458 systemd-networkd[730]: lo: Link UP Dec 13 01:53:16.741469 systemd-networkd[730]: lo: Gained carrier Dec 13 01:53:16.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.741841 systemd-networkd[730]: Enumeration completed Dec 13 01:53:16.741901 systemd[1]: Started systemd-networkd.service. Dec 13 01:53:16.742029 systemd-networkd[730]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:53:16.743138 systemd-networkd[730]: eth0: Link UP Dec 13 01:53:16.743140 systemd-networkd[730]: eth0: Gained carrier Dec 13 01:53:16.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.743804 systemd[1]: Reached target network.target. 
Dec 13 01:53:16.746030 systemd[1]: Starting iscsiuio.service... Dec 13 01:53:16.749470 systemd[1]: Started iscsiuio.service. Dec 13 01:53:16.752069 systemd[1]: Starting iscsid.service... Dec 13 01:53:16.756946 iscsid[737]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:53:16.756946 iscsid[737]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 01:53:16.756946 iscsid[737]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:53:16.756946 iscsid[737]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 01:53:16.756946 iscsid[737]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:53:16.756946 iscsid[737]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:53:16.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.755647 systemd[1]: Started iscsid.service. Dec 13 01:53:16.757491 systemd[1]: Starting dracut-initqueue.service... Dec 13 01:53:16.765232 systemd[1]: Finished dracut-initqueue.service. Dec 13 01:53:16.767206 systemd[1]: Reached target remote-fs-pre.target. 
Dec 13 01:53:16.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.769354 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:53:16.771879 systemd[1]: Reached target remote-fs.target. Dec 13 01:53:16.772797 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:53:16.778688 systemd[1]: Finished dracut-pre-mount.service. Dec 13 01:53:16.803561 ignition[665]: parsing config with SHA512: a23e5201eb51d8347c3337b0d94a91549841025acd011e18211a35ce0a0a49beda9fe2fbd77b64b3177255e56bac41087a55f19ddc6174ca7bb31e86b14d88be Dec 13 01:53:16.810293 unknown[665]: fetched base config from "system" Dec 13 01:53:16.810305 unknown[665]: fetched user config from "qemu" Dec 13 01:53:16.810778 ignition[665]: fetch-offline: fetch-offline passed Dec 13 01:53:16.811046 systemd-networkd[730]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:53:16.810824 ignition[665]: Ignition finished successfully Dec 13 01:53:16.815601 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 01:53:16.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.816601 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:53:16.817182 systemd[1]: Starting ignition-kargs.service... Dec 13 01:53:16.824713 ignition[751]: Ignition 2.14.0 Dec 13 01:53:16.824723 ignition[751]: Stage: kargs Dec 13 01:53:16.824802 ignition[751]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:16.824811 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:53:16.827147 systemd[1]: Finished ignition-kargs.service. 
Dec 13 01:53:16.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.825696 ignition[751]: kargs: kargs passed Dec 13 01:53:16.829571 systemd[1]: Starting ignition-disks.service... Dec 13 01:53:16.825724 ignition[751]: Ignition finished successfully Dec 13 01:53:16.835398 ignition[757]: Ignition 2.14.0 Dec 13 01:53:16.835409 ignition[757]: Stage: disks Dec 13 01:53:16.835501 ignition[757]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:16.835510 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:53:16.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:16.837024 systemd[1]: Finished ignition-disks.service. Dec 13 01:53:16.836422 ignition[757]: disks: disks passed Dec 13 01:53:16.838581 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:53:16.836455 ignition[757]: Ignition finished successfully Dec 13 01:53:16.840450 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:53:16.841297 systemd[1]: Reached target local-fs.target. Dec 13 01:53:16.842813 systemd[1]: Reached target sysinit.target. Dec 13 01:53:16.843218 systemd[1]: Reached target basic.target. Dec 13 01:53:16.844051 systemd[1]: Starting systemd-fsck-root.service... Dec 13 01:53:16.865234 systemd-fsck[765]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 01:53:17.004108 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:53:17.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:17.005797 systemd[1]: Mounting sysroot.mount... 
Dec 13 01:53:17.024007 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:53:17.024081 systemd[1]: Mounted sysroot.mount. Dec 13 01:53:17.025439 systemd[1]: Reached target initrd-root-fs.target. Dec 13 01:53:17.026627 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:53:17.028083 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 01:53:17.028112 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:53:17.028131 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:53:17.030124 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:53:17.032091 systemd[1]: Starting initrd-setup-root.service... Dec 13 01:53:17.036802 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:53:17.039406 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:53:17.042507 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:53:17.045513 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:53:17.068854 systemd[1]: Finished initrd-setup-root.service. Dec 13 01:53:17.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:17.070046 systemd[1]: Starting ignition-mount.service... Dec 13 01:53:17.071752 systemd[1]: Starting sysroot-boot.service... Dec 13 01:53:17.074322 bash[816]: umount: /sysroot/usr/share/oem: not mounted. 
Dec 13 01:53:17.081527 ignition[817]: INFO : Ignition 2.14.0 Dec 13 01:53:17.081527 ignition[817]: INFO : Stage: mount Dec 13 01:53:17.083197 ignition[817]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:17.083197 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:53:17.083197 ignition[817]: INFO : mount: mount passed Dec 13 01:53:17.083197 ignition[817]: INFO : Ignition finished successfully Dec 13 01:53:17.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:17.083143 systemd[1]: Finished ignition-mount.service. Dec 13 01:53:17.092663 systemd[1]: Finished sysroot-boot.service. Dec 13 01:53:17.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:17.608060 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:53:17.614596 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (826) Dec 13 01:53:17.614624 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:53:17.614634 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:53:17.615434 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:53:17.619169 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 01:53:17.620179 systemd[1]: Starting ignition-files.service... 
Dec 13 01:53:17.632168 ignition[846]: INFO : Ignition 2.14.0
Dec 13 01:53:17.632168 ignition[846]: INFO : Stage: files
Dec 13 01:53:17.633934 ignition[846]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:17.633934 ignition[846]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:53:17.633934 ignition[846]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:53:17.637373 ignition[846]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:53:17.637373 ignition[846]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:53:17.640148 ignition[846]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:53:17.640148 ignition[846]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:53:17.640148 ignition[846]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:53:17.640148 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:53:17.640148 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:53:17.640148 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:53:17.640148 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:53:17.639309 unknown[846]: wrote ssh authorized keys file for user: core
Dec 13 01:53:17.795003 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:53:17.892902 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:53:17.892902 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:53:17.896742 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:53:17.896742 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:53:17.900066 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:53:17.900066 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:53:17.903409 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:53:17.905107 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:53:17.906841 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:53:17.908596 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:53:17.910331 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:53:17.912021 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:53:17.914653 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:53:17.917130 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:53:17.919269 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:53:18.202099 systemd-networkd[730]: eth0: Gained IPv6LL
Dec 13 01:53:18.403503 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:53:18.877945 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:53:18.877945 ignition[846]: INFO : files: op(c): [started] processing unit "containerd.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(c): [finished] processing unit "containerd.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:53:18.882220 ignition[846]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:53:18.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.912426 ignition[846]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:53:18.912426 ignition[846]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:53:18.912426 ignition[846]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:53:18.912426 ignition[846]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:53:18.912426 ignition[846]: INFO : files: files passed
Dec 13 01:53:18.912426 ignition[846]: INFO : Ignition finished successfully
Dec 13 01:53:18.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.906175 systemd[1]: Finished ignition-files.service.
Dec 13 01:53:18.908438 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 01:53:18.925745 initrd-setup-root-after-ignition[872]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Dec 13 01:53:18.910071 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 01:53:18.929227 initrd-setup-root-after-ignition[874]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:53:18.910729 systemd[1]: Starting ignition-quench.service...
Dec 13 01:53:18.913212 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:53:18.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.913281 systemd[1]: Finished ignition-quench.service.
Dec 13 01:53:18.915879 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 01:53:18.917249 systemd[1]: Reached target ignition-complete.target.
Dec 13 01:53:18.920243 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 01:53:18.931598 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:53:18.931668 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 01:53:18.932687 systemd[1]: Reached target initrd-fs.target.
Dec 13 01:53:18.934275 systemd[1]: Reached target initrd.target.
Dec 13 01:53:18.935094 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 01:53:18.935630 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 01:53:18.943921 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 01:53:18.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.946354 systemd[1]: Starting initrd-cleanup.service...
Dec 13 01:53:18.953885 systemd[1]: Stopped target nss-lookup.target.
Dec 13 01:53:18.955568 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 01:53:18.956056 systemd[1]: Stopped target timers.target.
Dec 13 01:53:18.956358 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:53:18.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.956450 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 01:53:18.959366 systemd[1]: Stopped target initrd.target.
Dec 13 01:53:18.960963 systemd[1]: Stopped target basic.target.
Dec 13 01:53:18.961561 systemd[1]: Stopped target ignition-complete.target.
Dec 13 01:53:18.963404 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 01:53:18.965243 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 01:53:18.966852 systemd[1]: Stopped target remote-fs.target.
Dec 13 01:53:18.967404 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 01:53:18.969475 systemd[1]: Stopped target sysinit.target.
Dec 13 01:53:18.969798 systemd[1]: Stopped target local-fs.target.
Dec 13 01:53:18.972295 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 01:53:18.974046 systemd[1]: Stopped target swap.target.
Dec 13 01:53:18.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.975412 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:53:18.975499 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 01:53:18.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.977059 systemd[1]: Stopped target cryptsetup.target.
Dec 13 01:53:18.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.978355 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:53:18.978429 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 01:53:18.980053 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:53:18.980129 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 01:53:18.981563 systemd[1]: Stopped target paths.target.
Dec 13 01:53:18.982866 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:53:18.984050 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 01:53:18.985392 systemd[1]: Stopped target slices.target.
Dec 13 01:53:18.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.987072 systemd[1]: Stopped target sockets.target.
Dec 13 01:53:18.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.988546 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:53:18.988617 systemd[1]: Closed iscsid.socket.
Dec 13 01:53:18.989831 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:53:18.989889 systemd[1]: Closed iscsiuio.socket.
Dec 13 01:53:18.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.990410 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:53:18.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.001869 ignition[887]: INFO : Ignition 2.14.0
Dec 13 01:53:19.001869 ignition[887]: INFO : Stage: umount
Dec 13 01:53:19.001869 ignition[887]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:19.001869 ignition[887]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:53:19.001869 ignition[887]: INFO : umount: umount passed
Dec 13 01:53:19.001869 ignition[887]: INFO : Ignition finished successfully
Dec 13 01:53:19.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.990490 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 01:53:19.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:18.992629 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:53:18.992704 systemd[1]: Stopped ignition-files.service.
Dec 13 01:53:18.995094 systemd[1]: Stopping ignition-mount.service...
Dec 13 01:53:18.996676 systemd[1]: Stopping sysroot-boot.service...
Dec 13 01:53:18.997408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:53:18.997508 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 01:53:18.999170 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:53:18.999287 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 01:53:19.002653 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:53:19.002723 systemd[1]: Stopped ignition-mount.service.
Dec 13 01:53:19.003931 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:53:19.004014 systemd[1]: Finished initrd-cleanup.service.
Dec 13 01:53:19.004968 systemd[1]: Stopped target network.target.
Dec 13 01:53:19.006225 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:53:19.006270 systemd[1]: Stopped ignition-disks.service.
Dec 13 01:53:19.008094 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:53:19.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.008130 systemd[1]: Stopped ignition-kargs.service.
Dec 13 01:53:19.008486 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:53:19.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.008521 systemd[1]: Stopped ignition-setup.service.
Dec 13 01:53:19.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.011379 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:53:19.012998 systemd[1]: Stopping systemd-resolved.service...
Dec 13 01:53:19.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.015972 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:53:19.025021 systemd-networkd[730]: eth0: DHCPv6 lease lost
Dec 13 01:53:19.038000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 01:53:19.025825 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:53:19.025895 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:53:19.027802 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:53:19.042000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 01:53:19.027827 systemd[1]: Closed systemd-networkd.socket.
Dec 13 01:53:19.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.029081 systemd[1]: Stopping network-cleanup.service...
Dec 13 01:53:19.029804 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:53:19.029840 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 01:53:19.032904 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:53:19.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.032934 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 01:53:19.034579 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:53:19.034610 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 01:53:19.035567 systemd[1]: Stopping systemd-udevd.service...
Dec 13 01:53:19.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.037029 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 01:53:19.037395 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:53:19.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.037467 systemd[1]: Stopped systemd-resolved.service.
Dec 13 01:53:19.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.042796 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:53:19.042890 systemd[1]: Stopped network-cleanup.service.
Dec 13 01:53:19.047287 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:53:19.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.047386 systemd[1]: Stopped systemd-udevd.service.
Dec 13 01:53:19.050154 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:53:19.050192 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 01:53:19.051849 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:53:19.051878 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 01:53:19.053694 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:53:19.053733 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 01:53:19.055375 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:53:19.055416 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 01:53:19.056956 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:53:19.057021 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 01:53:19.059582 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 01:53:19.060755 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:53:19.060792 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 01:53:19.061790 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:53:19.061824 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 01:53:19.063315 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:53:19.063346 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 01:53:19.064971 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 01:53:19.065351 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:53:19.065413 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 01:53:19.107069 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:53:19.107154 systemd[1]: Stopped sysroot-boot.service.
Dec 13 01:53:19.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.108905 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 01:53:19.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:53:19.110370 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:53:19.110405 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 01:53:19.111387 systemd[1]: Starting initrd-switch-root.service...
Dec 13 01:53:19.116567 systemd[1]: Switching root.
Dec 13 01:53:19.118000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 01:53:19.118000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 01:53:19.118000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 01:53:19.118000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 01:53:19.119000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 01:53:19.135304 iscsid[737]: iscsid shutting down.
Dec 13 01:53:19.136043 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:53:19.136098 systemd-journald[197]: Journal stopped
Dec 13 01:53:21.884938 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 01:53:21.885018 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 01:53:21.885033 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 01:53:21.885043 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:53:21.885053 kernel: SELinux: policy capability open_perms=1
Dec 13 01:53:21.885062 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:53:21.885071 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:53:21.885082 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:53:21.885092 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:53:21.885107 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:53:21.885117 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:53:21.885126 kernel: kauditd_printk_skb: 69 callbacks suppressed
Dec 13 01:53:21.885139 kernel: audit: type=1403 audit(1734054799.217:80): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:53:21.885150 systemd[1]: Successfully loaded SELinux policy in 39.621ms.
Dec 13 01:53:21.885165 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.607ms.
Dec 13 01:53:21.885178 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:53:21.885188 systemd[1]: Detected virtualization kvm.
Dec 13 01:53:21.885198 systemd[1]: Detected architecture x86-64.
Dec 13 01:53:21.885208 systemd[1]: Detected first boot.
Dec 13 01:53:21.885217 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:53:21.885227 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 01:53:21.885237 kernel: audit: type=1400 audit(1734054799.505:81): avc: denied { associate } for pid=939 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 01:53:21.885252 kernel: audit: type=1300 audit(1734054799.505:81): arch=c000003e syscall=188 success=yes exit=0 a0=c0001076c2 a1=c00002cb40 a2=c00002aa40 a3=32 items=0 ppid=922 pid=939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:53:21.885263 kernel: audit: type=1327 audit(1734054799.505:81): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:53:21.885273 kernel: audit: type=1400 audit(1734054799.508:82): avc: denied { associate } for pid=939 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 01:53:21.885284 kernel: audit: type=1300 audit(1734054799.508:82): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000107799 a2=1ed a3=0 items=2 ppid=922 pid=939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:53:21.885293 kernel: audit: type=1307 audit(1734054799.508:82): cwd="/"
Dec 13 01:53:21.885306 kernel: audit: type=1302 audit(1734054799.508:82): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:53:21.885316 kernel: audit: type=1302 audit(1734054799.508:82): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:53:21.885328 kernel: audit: type=1327 audit(1734054799.508:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:53:21.885337 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:53:21.885348 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:53:21.885361 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:53:21.885375 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:53:21.885385 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:53:21.885395 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 01:53:21.885405 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 01:53:21.885416 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 01:53:21.885426 systemd[1]: Created slice system-getty.slice.
Dec 13 01:53:21.885435 systemd[1]: Created slice system-modprobe.slice. Dec 13 01:53:21.885447 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 01:53:21.885457 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 01:53:21.885468 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 01:53:21.885486 systemd[1]: Created slice user.slice. Dec 13 01:53:21.885496 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:53:21.885506 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 01:53:21.885516 systemd[1]: Set up automount boot.automount. Dec 13 01:53:21.885527 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 01:53:21.885538 systemd[1]: Reached target integritysetup.target. Dec 13 01:53:21.885548 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:53:21.885558 systemd[1]: Reached target remote-fs.target. Dec 13 01:53:21.885568 systemd[1]: Reached target slices.target. Dec 13 01:53:21.885578 systemd[1]: Reached target swap.target. Dec 13 01:53:21.885588 systemd[1]: Reached target torcx.target. Dec 13 01:53:21.885598 systemd[1]: Reached target veritysetup.target. Dec 13 01:53:21.885610 systemd[1]: Listening on systemd-coredump.socket. Dec 13 01:53:21.885623 systemd[1]: Listening on systemd-initctl.socket. Dec 13 01:53:21.885637 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 01:53:21.885650 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 01:53:21.885663 systemd[1]: Listening on systemd-journald.socket. Dec 13 01:53:21.885675 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:53:21.885688 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:53:21.885701 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:53:21.885711 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 01:53:21.885721 systemd[1]: Mounting dev-hugepages.mount... Dec 13 01:53:21.885731 systemd[1]: Mounting dev-mqueue.mount... 
Dec 13 01:53:21.885742 systemd[1]: Mounting media.mount... Dec 13 01:53:21.885754 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:21.885764 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 01:53:21.885775 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 01:53:21.885785 systemd[1]: Mounting tmp.mount... Dec 13 01:53:21.885795 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 01:53:21.885805 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:53:21.885815 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:53:21.885824 systemd[1]: Starting modprobe@configfs.service... Dec 13 01:53:21.885835 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:53:21.885847 systemd[1]: Starting modprobe@drm.service... Dec 13 01:53:21.885857 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:53:21.885868 systemd[1]: Starting modprobe@fuse.service... Dec 13 01:53:21.885878 systemd[1]: Starting modprobe@loop.service... Dec 13 01:53:21.885888 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:53:21.885898 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:53:21.885908 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:53:21.885918 systemd[1]: Starting systemd-journald.service... Dec 13 01:53:21.885928 kernel: loop: module loaded Dec 13 01:53:21.885939 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:53:21.885949 systemd[1]: Starting systemd-network-generator.service... Dec 13 01:53:21.885959 kernel: fuse: init (API version 7.34) Dec 13 01:53:21.885969 systemd[1]: Starting systemd-remount-fs.service... Dec 13 01:53:21.885991 systemd[1]: Starting systemd-udev-trigger.service... 
Dec 13 01:53:21.886001 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:21.886012 systemd[1]: Mounted dev-hugepages.mount. Dec 13 01:53:21.886021 systemd[1]: Mounted dev-mqueue.mount. Dec 13 01:53:21.886031 systemd[1]: Mounted media.mount. Dec 13 01:53:21.886043 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 01:53:21.886055 systemd-journald[1035]: Journal started Dec 13 01:53:21.886090 systemd-journald[1035]: Runtime Journal (/run/log/journal/a13cfc49f4084068b15b4d44cff2d0d3) is 6.0M, max 48.4M, 42.4M free. Dec 13 01:53:21.882000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 01:53:21.882000 audit[1035]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffd3c65f90 a2=4000 a3=7fffd3c6602c items=0 ppid=1 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:21.882000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 01:53:21.888041 systemd[1]: Started systemd-journald.service. Dec 13 01:53:21.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.889327 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 01:53:21.890235 systemd[1]: Mounted tmp.mount. Dec 13 01:53:21.891268 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:53:21.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:21.892337 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:53:21.892507 systemd[1]: Finished modprobe@configfs.service. Dec 13 01:53:21.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.893569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:21.893725 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:53:21.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.894872 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:53:21.895031 systemd[1]: Finished modprobe@drm.service. Dec 13 01:53:21.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.896332 systemd[1]: Finished flatcar-tmpfiles.service. 
Dec 13 01:53:21.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.897440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:21.897574 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:53:21.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.898733 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:53:21.898925 systemd[1]: Finished modprobe@fuse.service. Dec 13 01:53:21.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.900018 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:21.900188 systemd[1]: Finished modprobe@loop.service. Dec 13 01:53:21.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:21.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.901553 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:53:21.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.902885 systemd[1]: Finished systemd-network-generator.service. Dec 13 01:53:21.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.904325 systemd[1]: Finished systemd-remount-fs.service. Dec 13 01:53:21.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.905727 systemd[1]: Reached target network-pre.target. Dec 13 01:53:21.907937 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 01:53:21.909850 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 01:53:21.910710 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:53:21.911966 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 01:53:21.913639 systemd[1]: Starting systemd-journal-flush.service... Dec 13 01:53:21.914620 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:21.915462 systemd[1]: Starting systemd-random-seed.service... 
Dec 13 01:53:21.920028 systemd-journald[1035]: Time spent on flushing to /var/log/journal/a13cfc49f4084068b15b4d44cff2d0d3 is 13.445ms for 1091 entries. Dec 13 01:53:21.920028 systemd-journald[1035]: System Journal (/var/log/journal/a13cfc49f4084068b15b4d44cff2d0d3) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:53:22.004406 systemd-journald[1035]: Received client request to flush runtime journal. Dec 13 01:53:21.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:21.919269 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:53:21.920206 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:53:21.923099 systemd[1]: Starting systemd-sysusers.service... Dec 13 01:53:21.926846 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 01:53:22.005083 udevadm[1071]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:53:21.927858 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 01:53:21.928791 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 01:53:21.930436 systemd[1]: Starting systemd-udev-settle.service... Dec 13 01:53:21.967217 systemd[1]: Finished systemd-sysusers.service. Dec 13 01:53:21.968532 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:53:21.970549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:53:21.982689 systemd[1]: Finished systemd-random-seed.service. Dec 13 01:53:21.983861 systemd[1]: Reached target first-boot-complete.target. Dec 13 01:53:21.987214 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:53:22.005213 systemd[1]: Finished systemd-journal-flush.service. Dec 13 01:53:22.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:22.411730 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 01:53:22.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:22.414240 systemd[1]: Starting systemd-udevd.service... Dec 13 01:53:22.435088 systemd-udevd[1082]: Using default interface naming scheme 'v252'. Dec 13 01:53:22.447229 systemd[1]: Started systemd-udevd.service. Dec 13 01:53:22.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:22.450282 systemd[1]: Starting systemd-networkd.service... Dec 13 01:53:22.466495 systemd[1]: Starting systemd-userdbd.service... Dec 13 01:53:22.488789 systemd[1]: Found device dev-ttyS0.device. Dec 13 01:53:22.506436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:53:22.513114 systemd[1]: Started systemd-userdbd.service. Dec 13 01:53:22.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:22.533017 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:53:22.546003 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:53:22.554000 audit[1099]: AVC avc: denied { confidentiality } for pid=1099 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 01:53:22.578470 systemd-networkd[1092]: lo: Link UP Dec 13 01:53:22.578485 systemd-networkd[1092]: lo: Gained carrier Dec 13 01:53:22.578909 systemd-networkd[1092]: Enumeration completed Dec 13 01:53:22.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:22.579030 systemd[1]: Started systemd-networkd.service. Dec 13 01:53:22.579098 systemd-networkd[1092]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:53:22.580559 systemd-networkd[1092]: eth0: Link UP Dec 13 01:53:22.580572 systemd-networkd[1092]: eth0: Gained carrier Dec 13 01:53:22.554000 audit[1099]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555f438ed8e0 a1=337fc a2=7f763f5e4bc5 a3=5 items=110 ppid=1082 pid=1099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:22.554000 audit: CWD cwd="/" Dec 13 01:53:22.554000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=1 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=2 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=3 name=(null) inode=15860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=4 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=5 name=(null) inode=15861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=6 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=7 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=8 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=9 name=(null) inode=15863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=10 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=11 name=(null) inode=15864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=12 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=13 name=(null) inode=15865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=14 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=15 name=(null) inode=15866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
01:53:22.554000 audit: PATH item=16 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=17 name=(null) inode=15867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=18 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=19 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=20 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=21 name=(null) inode=15869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=22 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=23 name=(null) inode=15870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=24 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=25 
name=(null) inode=15871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=26 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=27 name=(null) inode=15872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=28 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=29 name=(null) inode=15873 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=30 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=31 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=32 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=33 name=(null) inode=15875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=34 name=(null) inode=15874 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=35 name=(null) inode=15876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=36 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=37 name=(null) inode=15877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=38 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=39 name=(null) inode=15878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=40 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=41 name=(null) inode=15879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=42 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=43 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=44 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=45 name=(null) inode=15881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=46 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=47 name=(null) inode=15882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=48 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=49 name=(null) inode=15883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=50 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=51 name=(null) inode=15884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=52 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=53 name=(null) inode=15885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=55 name=(null) inode=15886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=56 name=(null) inode=15886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=57 name=(null) inode=15887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=58 name=(null) inode=15886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=59 name=(null) inode=15888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=60 name=(null) inode=15886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=61 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=62 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=63 name=(null) inode=15890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=64 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=65 name=(null) inode=15891 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=66 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=67 name=(null) inode=15892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=68 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=69 name=(null) inode=15893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=70 name=(null) inode=15889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
01:53:22.554000 audit: PATH item=71 name=(null) inode=15894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=72 name=(null) inode=15886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=73 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=74 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=75 name=(null) inode=15896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=76 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=77 name=(null) inode=15897 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=78 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=79 name=(null) inode=15898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=80 
name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=81 name=(null) inode=15899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=82 name=(null) inode=15895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=83 name=(null) inode=15900 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=84 name=(null) inode=15886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=85 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=86 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=87 name=(null) inode=15902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=88 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=89 name=(null) inode=15903 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=90 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=91 name=(null) inode=15904 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=92 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=93 name=(null) inode=15905 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=94 name=(null) inode=15901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=95 name=(null) inode=15906 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=96 name=(null) inode=15886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=97 name=(null) inode=15907 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=98 name=(null) inode=15907 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=99 name=(null) inode=15908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=100 name=(null) inode=15907 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=101 name=(null) inode=15909 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=102 name=(null) inode=15907 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=103 name=(null) inode=15910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=104 name=(null) inode=15907 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=105 name=(null) inode=15911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=106 name=(null) inode=15907 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=107 name=(null) inode=15912 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PATH item=109 name=(null) inode=15913 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:53:22.554000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 01:53:22.591004 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:53:22.598119 systemd-networkd[1092]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:53:22.600032 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:53:22.604473 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 01:53:22.608816 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:53:22.608923 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:53:22.609069 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:53:22.636667 kernel: kvm: Nested Virtualization enabled Dec 13 01:53:22.636725 kernel: SVM: kvm: Nested Paging enabled Dec 13 01:53:22.636754 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 01:53:22.636767 kernel: SVM: Virtual GIF supported Dec 13 01:53:22.654008 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:53:22.680428 systemd[1]: Finished systemd-udev-settle.service. Dec 13 01:53:22.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:22.682494 systemd[1]: Starting lvm2-activation-early.service... 
Dec 13 01:53:22.689662 lvm[1119]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:53:22.715209 systemd[1]: Finished lvm2-activation-early.service. Dec 13 01:53:22.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:22.716274 systemd[1]: Reached target cryptsetup.target. Dec 13 01:53:22.718134 systemd[1]: Starting lvm2-activation.service... Dec 13 01:53:22.721708 lvm[1121]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:53:22.746615 systemd[1]: Finished lvm2-activation.service. Dec 13 01:53:22.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:22.747536 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:53:22.748370 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:53:22.748386 systemd[1]: Reached target local-fs.target. Dec 13 01:53:22.749221 systemd[1]: Reached target machines.target. Dec 13 01:53:22.750906 systemd[1]: Starting ldconfig.service... Dec 13 01:53:22.751838 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:53:22.751917 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:53:22.753011 systemd[1]: Starting systemd-boot-update.service... Dec 13 01:53:22.754627 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 01:53:22.756748 systemd[1]: Starting systemd-machine-id-commit.service... 
Dec 13 01:53:22.758675 systemd[1]: Starting systemd-sysext.service... Dec 13 01:53:22.759753 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1124 (bootctl) Dec 13 01:53:22.760562 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 01:53:22.763194 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 01:53:22.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:22.771115 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 01:53:22.774418 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 01:53:22.774580 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 01:53:22.783998 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:53:22.791940 systemd-fsck[1132]: fsck.fat 4.2 (2021-01-31) Dec 13 01:53:22.791940 systemd-fsck[1132]: /dev/vda1: 790 files, 119311/258078 clusters Dec 13 01:53:22.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:22.792853 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 01:53:22.795761 systemd[1]: Mounting boot.mount... Dec 13 01:53:22.808805 systemd[1]: Mounted boot.mount. Dec 13 01:53:23.089000 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:53:23.090189 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:53:23.090968 systemd[1]: Finished systemd-boot-update.service. 
Dec 13 01:53:23.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.093562 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 01:53:23.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.105997 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:53:23.109390 (sd-sysext)[1145]: Using extensions 'kubernetes'. Dec 13 01:53:23.109673 (sd-sysext)[1145]: Merged extensions into '/usr'. Dec 13 01:53:23.125329 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:23.127010 systemd[1]: Mounting usr-share-oem.mount... Dec 13 01:53:23.128141 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.129117 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:53:23.130791 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:53:23.132635 systemd[1]: Starting modprobe@loop.service... Dec 13 01:53:23.133478 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.133589 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:53:23.133688 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:23.136167 systemd[1]: Mounted usr-share-oem.mount. Dec 13 01:53:23.137476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 01:53:23.137606 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:53:23.138427 ldconfig[1123]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:53:23.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.138871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:23.139008 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:53:23.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.140293 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:23.140421 systemd[1]: Finished modprobe@loop.service. Dec 13 01:53:23.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:23.141641 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:23.141728 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.143057 systemd[1]: Finished systemd-sysext.service. Dec 13 01:53:23.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.145221 systemd[1]: Starting ensure-sysext.service... Dec 13 01:53:23.146919 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 01:53:23.150097 systemd[1]: Finished ldconfig.service. Dec 13 01:53:23.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.152623 systemd[1]: Reloading. Dec 13 01:53:23.155216 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 01:53:23.156159 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:53:23.157472 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 01:53:23.193404 /usr/lib/systemd/system-generators/torcx-generator[1180]: time="2024-12-13T01:53:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:53:23.193430 /usr/lib/systemd/system-generators/torcx-generator[1180]: time="2024-12-13T01:53:23Z" level=info msg="torcx already run" Dec 13 01:53:23.269129 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:53:23.269147 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:53:23.287659 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:53:23.337083 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 01:53:23.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.340801 systemd[1]: Starting audit-rules.service... Dec 13 01:53:23.342581 systemd[1]: Starting clean-ca-certificates.service... Dec 13 01:53:23.344467 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 01:53:23.346850 systemd[1]: Starting systemd-resolved.service... Dec 13 01:53:23.348786 systemd[1]: Starting systemd-timesyncd.service... Dec 13 01:53:23.350770 systemd[1]: Starting systemd-update-utmp.service... Dec 13 01:53:23.352363 systemd[1]: Finished clean-ca-certificates.service. 
Dec 13 01:53:23.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.354000 audit[1241]: SYSTEM_BOOT pid=1241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 01:53:23.359859 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:23.360125 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.361486 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:53:23.364054 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:53:23.365746 systemd[1]: Starting modprobe@loop.service... Dec 13 01:53:23.366489 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.366583 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:53:23.366670 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:53:23.366730 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:23.367754 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 01:53:23.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:23.368000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:53:23.368000 audit[1258]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff19c058b0 a2=420 a3=0 items=0 ppid=1229 pid=1258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:23.368000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:53:23.370697 augenrules[1258]: No rules Dec 13 01:53:23.369754 systemd[1]: Finished systemd-update-utmp.service. Dec 13 01:53:23.371295 systemd[1]: Finished audit-rules.service. Dec 13 01:53:23.372687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:23.372807 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:53:23.374246 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:23.374399 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:53:23.375654 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:23.375813 systemd[1]: Finished modprobe@loop.service. Dec 13 01:53:23.377818 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:23.377908 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.379190 systemd[1]: Starting systemd-update-done.service... Dec 13 01:53:23.382331 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:23.382518 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.383589 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 01:53:23.385967 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:53:23.388004 systemd[1]: Starting modprobe@loop.service... Dec 13 01:53:23.389333 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.389445 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:53:23.389536 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:53:23.389598 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:23.390467 systemd[1]: Finished systemd-update-done.service. Dec 13 01:53:23.393067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:23.393221 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:53:23.394402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:23.394549 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:53:23.395800 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:23.395963 systemd[1]: Finished modprobe@loop.service. Dec 13 01:53:23.397150 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:23.397243 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.399691 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:23.399932 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.401173 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 01:53:23.403373 systemd[1]: Starting modprobe@drm.service... Dec 13 01:53:23.405131 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:53:23.406836 systemd[1]: Starting modprobe@loop.service... Dec 13 01:53:23.407712 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.407810 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:53:23.408862 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:53:23.409853 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:53:23.409958 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:53:23.414368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:23.414519 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:53:23.415692 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:53:23.415813 systemd[1]: Finished modprobe@drm.service. Dec 13 01:53:23.416975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:23.417136 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:53:23.418415 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:23.418588 systemd[1]: Finished modprobe@loop.service. Dec 13 01:53:23.419971 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:23.420093 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.421104 systemd[1]: Finished ensure-sysext.service. 
Dec 13 01:53:23.431154 systemd-resolved[1239]: Positive Trust Anchors: Dec 13 01:53:23.431167 systemd-resolved[1239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:53:23.431193 systemd-resolved[1239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:53:23.434821 systemd[1]: Started systemd-timesyncd.service. Dec 13 01:53:23.435898 systemd[1]: Reached target time-set.target. Dec 13 01:53:23.959319 systemd-timesyncd[1240]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:53:23.959358 systemd-timesyncd[1240]: Initial clock synchronization to Fri 2024-12-13 01:53:23.959248 UTC. Dec 13 01:53:23.962637 systemd-resolved[1239]: Defaulting to hostname 'linux'. Dec 13 01:53:23.963927 systemd[1]: Started systemd-resolved.service. Dec 13 01:53:23.964795 systemd[1]: Reached target network.target. Dec 13 01:53:23.965582 systemd[1]: Reached target nss-lookup.target. Dec 13 01:53:23.966415 systemd[1]: Reached target sysinit.target. Dec 13 01:53:23.967292 systemd[1]: Started motdgen.path. Dec 13 01:53:23.968005 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 01:53:23.969186 systemd[1]: Started logrotate.timer. Dec 13 01:53:23.969965 systemd[1]: Started mdadm.timer. Dec 13 01:53:23.970623 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 01:53:23.971454 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:53:23.971476 systemd[1]: Reached target paths.target. 
Dec 13 01:53:23.972204 systemd[1]: Reached target timers.target. Dec 13 01:53:23.973224 systemd[1]: Listening on dbus.socket. Dec 13 01:53:23.974976 systemd[1]: Starting docker.socket... Dec 13 01:53:23.976422 systemd[1]: Listening on sshd.socket. Dec 13 01:53:23.977245 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:53:23.977495 systemd[1]: Listening on docker.socket. Dec 13 01:53:23.978317 systemd[1]: Reached target sockets.target. Dec 13 01:53:23.979146 systemd[1]: Reached target basic.target. Dec 13 01:53:23.980020 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:53:23.980058 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.980076 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:53:23.980944 systemd[1]: Starting containerd.service... Dec 13 01:53:23.982662 systemd[1]: Starting dbus.service... Dec 13 01:53:23.987052 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 01:53:23.988847 systemd[1]: Starting extend-filesystems.service... Dec 13 01:53:23.989752 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 01:53:23.990772 systemd[1]: Starting motdgen.service... Dec 13 01:53:23.991698 jq[1292]: false Dec 13 01:53:23.993197 systemd[1]: Starting prepare-helm.service... Dec 13 01:53:23.994795 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 01:53:23.996633 systemd[1]: Starting sshd-keygen.service... Dec 13 01:53:23.999027 systemd[1]: Starting systemd-logind.service... 
Dec 13 01:53:23.999861 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:53:23.999921 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:53:24.000872 systemd[1]: Starting update-engine.service... Dec 13 01:53:24.004023 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 01:53:24.006632 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:53:24.006877 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 01:53:24.007576 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:53:24.008238 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 01:53:24.012382 jq[1311]: true Dec 13 01:53:24.018535 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:53:24.018772 systemd[1]: Finished motdgen.service. Dec 13 01:53:24.020304 tar[1315]: linux-amd64/helm Dec 13 01:53:24.023903 jq[1319]: true Dec 13 01:53:24.031389 dbus-daemon[1290]: [system] SELinux support is enabled Dec 13 01:53:24.031549 systemd[1]: Started dbus.service. Dec 13 01:53:24.034110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:53:24.034140 systemd[1]: Reached target system-config.target. Dec 13 01:53:24.035081 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:53:24.035108 systemd[1]: Reached target user-config.target. 
Dec 13 01:53:24.038902 extend-filesystems[1293]: Found loop1 Dec 13 01:53:24.046196 extend-filesystems[1293]: Found sr0 Dec 13 01:53:24.047921 systemd-logind[1304]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:53:24.048146 systemd-logind[1304]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:53:24.048341 extend-filesystems[1293]: Found vda Dec 13 01:53:24.051152 extend-filesystems[1293]: Found vda1 Dec 13 01:53:24.051152 extend-filesystems[1293]: Found vda2 Dec 13 01:53:24.051152 extend-filesystems[1293]: Found vda3 Dec 13 01:53:24.051152 extend-filesystems[1293]: Found usr Dec 13 01:53:24.051152 extend-filesystems[1293]: Found vda4 Dec 13 01:53:24.051152 extend-filesystems[1293]: Found vda6 Dec 13 01:53:24.051152 extend-filesystems[1293]: Found vda7 Dec 13 01:53:24.051152 extend-filesystems[1293]: Found vda9 Dec 13 01:53:24.051152 extend-filesystems[1293]: Checking size of /dev/vda9 Dec 13 01:53:24.052751 systemd-logind[1304]: New seat seat0. Dec 13 01:53:24.066025 bash[1354]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:53:24.066087 env[1318]: time="2024-12-13T01:53:24.055813032Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 01:53:24.062751 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 01:53:24.064581 systemd[1]: Started systemd-logind.service. Dec 13 01:53:24.073294 update_engine[1307]: I1213 01:53:24.072692 1307 main.cc:92] Flatcar Update Engine starting Dec 13 01:53:24.082963 systemd[1]: Started update-engine.service. Dec 13 01:53:24.085941 update_engine[1307]: I1213 01:53:24.083959 1307 update_check_scheduler.cc:74] Next update check in 2m10s Dec 13 01:53:24.085527 systemd[1]: Started locksmithd.service. 
Dec 13 01:53:24.088828 extend-filesystems[1293]: Resized partition /dev/vda9 Dec 13 01:53:24.091550 extend-filesystems[1365]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 01:53:24.102312 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:53:24.115029 env[1318]: time="2024-12-13T01:53:24.114781390Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:53:24.115029 env[1318]: time="2024-12-13T01:53:24.114896105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:24.116082 env[1318]: time="2024-12-13T01:53:24.116047445Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:24.116313 env[1318]: time="2024-12-13T01:53:24.116294368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:24.116597 env[1318]: time="2024-12-13T01:53:24.116579302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:24.116709 env[1318]: time="2024-12-13T01:53:24.116684349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:24.116831 env[1318]: time="2024-12-13T01:53:24.116806248Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 01:53:24.116929 env[1318]: time="2024-12-13T01:53:24.116904512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:53:24.117081 env[1318]: time="2024-12-13T01:53:24.117065123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:24.117376 env[1318]: time="2024-12-13T01:53:24.117359746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:24.117800 env[1318]: time="2024-12-13T01:53:24.117763964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:24.117927 env[1318]: time="2024-12-13T01:53:24.117879962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:53:24.119289 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:53:24.138242 env[1318]: time="2024-12-13T01:53:24.137840061Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 01:53:24.138242 env[1318]: time="2024-12-13T01:53:24.137880577Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:53:24.138340 extend-filesystems[1365]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:53:24.138340 extend-filesystems[1365]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:53:24.138340 extend-filesystems[1365]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:53:24.143333 extend-filesystems[1293]: Resized filesystem in /dev/vda9 Dec 13 01:53:24.138514 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:53:24.138754 systemd[1]: Finished extend-filesystems.service. Dec 13 01:53:24.146655 env[1318]: time="2024-12-13T01:53:24.146618627Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Dec 13 01:53:24.146706 env[1318]: time="2024-12-13T01:53:24.146673480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:53:24.146706 env[1318]: time="2024-12-13T01:53:24.146686915Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:53:24.146793 env[1318]: time="2024-12-13T01:53:24.146770753Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:53:24.146793 env[1318]: time="2024-12-13T01:53:24.146792073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:53:24.146853 env[1318]: time="2024-12-13T01:53:24.146817060Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:53:24.146853 env[1318]: time="2024-12-13T01:53:24.146830465Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:53:24.146853 env[1318]: time="2024-12-13T01:53:24.146843850Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:53:24.146915 env[1318]: time="2024-12-13T01:53:24.146856443Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 01:53:24.146915 env[1318]: time="2024-12-13T01:53:24.146869839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:53:24.146915 env[1318]: time="2024-12-13T01:53:24.146893142Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:53:24.146915 env[1318]: time="2024-12-13T01:53:24.146905686Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 01:53:24.147046 env[1318]: time="2024-12-13T01:53:24.147019650Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:53:24.147142 env[1318]: time="2024-12-13T01:53:24.147104899Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:53:24.147557 env[1318]: time="2024-12-13T01:53:24.147521932Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:53:24.147599 env[1318]: time="2024-12-13T01:53:24.147549564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147599 env[1318]: time="2024-12-13T01:53:24.147574049Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:53:24.147640 env[1318]: time="2024-12-13T01:53:24.147616249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147708 env[1318]: time="2024-12-13T01:53:24.147676522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147708 env[1318]: time="2024-12-13T01:53:24.147693103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147784 env[1318]: time="2024-12-13T01:53:24.147720664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147784 env[1318]: time="2024-12-13T01:53:24.147733328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147784 env[1318]: time="2024-12-13T01:53:24.147744529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 01:53:24.147784 env[1318]: time="2024-12-13T01:53:24.147755460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147784 env[1318]: time="2024-12-13T01:53:24.147765138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147884 env[1318]: time="2024-12-13T01:53:24.147777431Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:53:24.147928 env[1318]: time="2024-12-13T01:53:24.147904189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147928 env[1318]: time="2024-12-13T01:53:24.147924677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147988 env[1318]: time="2024-12-13T01:53:24.147948963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:53:24.147988 env[1318]: time="2024-12-13T01:53:24.147960885Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:53:24.147988 env[1318]: time="2024-12-13T01:53:24.147975162Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 01:53:24.147988 env[1318]: time="2024-12-13T01:53:24.147985511Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:53:24.148068 env[1318]: time="2024-12-13T01:53:24.148005358Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 01:53:24.148068 env[1318]: time="2024-12-13T01:53:24.148050473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:53:24.148377 env[1318]: time="2024-12-13T01:53:24.148296915Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:53:24.148377 env[1318]: time="2024-12-13T01:53:24.148362568Z" level=info msg="Connect containerd service" Dec 13 01:53:24.149048 env[1318]: time="2024-12-13T01:53:24.148385872Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:53:24.149048 env[1318]: time="2024-12-13T01:53:24.148949690Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:53:24.149343 env[1318]: time="2024-12-13T01:53:24.149077239Z" level=info msg="Start subscribing containerd event" Dec 13 01:53:24.149343 env[1318]: time="2024-12-13T01:53:24.149120109Z" level=info msg="Start recovering state" Dec 13 01:53:24.149343 env[1318]: time="2024-12-13T01:53:24.149161417Z" level=info msg="Start event monitor" Dec 13 01:53:24.149343 env[1318]: time="2024-12-13T01:53:24.149175984Z" level=info msg="Start snapshots syncer" Dec 13 01:53:24.149343 env[1318]: time="2024-12-13T01:53:24.149182797Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:53:24.149343 env[1318]: time="2024-12-13T01:53:24.149189319Z" level=info msg="Start streaming server" Dec 13 01:53:24.149482 env[1318]: time="2024-12-13T01:53:24.149441693Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:53:24.149506 env[1318]: time="2024-12-13T01:53:24.149491646Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:53:24.149639 systemd[1]: Started containerd.service. 
Dec 13 01:53:24.151206 locksmithd[1364]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:53:24.153401 env[1318]: time="2024-12-13T01:53:24.153373077Z" level=info msg="containerd successfully booted in 0.098191s" Dec 13 01:53:24.329473 sshd_keygen[1321]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:53:24.347089 systemd[1]: Finished sshd-keygen.service. Dec 13 01:53:24.349622 systemd[1]: Starting issuegen.service... Dec 13 01:53:24.354848 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:53:24.355064 systemd[1]: Finished issuegen.service. Dec 13 01:53:24.357328 systemd[1]: Starting systemd-user-sessions.service... Dec 13 01:53:24.361560 systemd[1]: Finished systemd-user-sessions.service. Dec 13 01:53:24.363662 systemd[1]: Started getty@tty1.service. Dec 13 01:53:24.365411 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 01:53:24.366504 systemd[1]: Reached target getty.target. Dec 13 01:53:24.421885 tar[1315]: linux-amd64/LICENSE Dec 13 01:53:24.421945 tar[1315]: linux-amd64/README.md Dec 13 01:53:24.425966 systemd[1]: Finished prepare-helm.service. Dec 13 01:53:24.933457 systemd-networkd[1092]: eth0: Gained IPv6LL Dec 13 01:53:24.935401 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:53:24.936902 systemd[1]: Reached target network-online.target. Dec 13 01:53:24.939868 systemd[1]: Starting kubelet.service... Dec 13 01:53:25.478400 systemd[1]: Started kubelet.service. Dec 13 01:53:25.479608 systemd[1]: Reached target multi-user.target. Dec 13 01:53:25.481590 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 01:53:25.486754 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 01:53:25.486926 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 01:53:25.489147 systemd[1]: Startup finished in 5.073s (kernel) + 5.788s (userspace) = 10.861s. 
Dec 13 01:53:25.927698 kubelet[1403]: E1213 01:53:25.927550 1403 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:25.929258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:25.929409 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:33.442260 systemd[1]: Created slice system-sshd.slice. Dec 13 01:53:33.443397 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:43170.service. Dec 13 01:53:33.480464 sshd[1414]: Accepted publickey for core from 10.0.0.1 port 43170 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:53:33.481946 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:33.488981 systemd[1]: Created slice user-500.slice. Dec 13 01:53:33.489958 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 01:53:33.491708 systemd-logind[1304]: New session 1 of user core. Dec 13 01:53:33.498850 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 01:53:33.500121 systemd[1]: Starting user@500.service... Dec 13 01:53:33.503102 (systemd)[1419]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:33.572036 systemd[1419]: Queued start job for default target default.target. Dec 13 01:53:33.572216 systemd[1419]: Reached target paths.target. Dec 13 01:53:33.572230 systemd[1419]: Reached target sockets.target. Dec 13 01:53:33.572241 systemd[1419]: Reached target timers.target. Dec 13 01:53:33.572251 systemd[1419]: Reached target basic.target. Dec 13 01:53:33.572293 systemd[1419]: Reached target default.target. Dec 13 01:53:33.572311 systemd[1419]: Startup finished in 63ms. 
Dec 13 01:53:33.572407 systemd[1]: Started user@500.service. Dec 13 01:53:33.573247 systemd[1]: Started session-1.scope. Dec 13 01:53:33.623536 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:43174.service. Dec 13 01:53:33.657203 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 43174 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:53:33.658342 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:33.662024 systemd-logind[1304]: New session 2 of user core. Dec 13 01:53:33.662718 systemd[1]: Started session-2.scope. Dec 13 01:53:33.717229 sshd[1428]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:33.719540 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:43188.service. Dec 13 01:53:33.719962 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:43174.service: Deactivated successfully. Dec 13 01:53:33.720706 systemd-logind[1304]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:53:33.720758 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:53:33.721566 systemd-logind[1304]: Removed session 2. Dec 13 01:53:33.751235 sshd[1433]: Accepted publickey for core from 10.0.0.1 port 43188 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:53:33.752234 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:33.755283 systemd-logind[1304]: New session 3 of user core. Dec 13 01:53:33.755980 systemd[1]: Started session-3.scope. Dec 13 01:53:33.803820 sshd[1433]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:33.805856 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:43204.service. Dec 13 01:53:33.806715 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:43188.service: Deactivated successfully. Dec 13 01:53:33.807362 systemd-logind[1304]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:53:33.807421 systemd[1]: session-3.scope: Deactivated successfully. 
Dec 13 01:53:33.808146 systemd-logind[1304]: Removed session 3. Dec 13 01:53:33.839462 sshd[1440]: Accepted publickey for core from 10.0.0.1 port 43204 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:53:33.840420 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:33.843256 systemd-logind[1304]: New session 4 of user core. Dec 13 01:53:33.843864 systemd[1]: Started session-4.scope. Dec 13 01:53:33.896390 sshd[1440]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:33.898298 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:43208.service. Dec 13 01:53:33.898665 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:43204.service: Deactivated successfully. Dec 13 01:53:33.899460 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:53:33.899509 systemd-logind[1304]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:53:33.900341 systemd-logind[1304]: Removed session 4. Dec 13 01:53:33.930698 sshd[1447]: Accepted publickey for core from 10.0.0.1 port 43208 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:53:33.931701 sshd[1447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:33.934659 systemd-logind[1304]: New session 5 of user core. Dec 13 01:53:33.935295 systemd[1]: Started session-5.scope. Dec 13 01:53:33.989991 sudo[1453]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:53:33.990175 sudo[1453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:53:33.997511 dbus-daemon[1290]: \xd0M\xa0\xe6\u0006V: received setenforce notice (enforcing=-1800072848) Dec 13 01:53:33.999516 sudo[1453]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:34.001177 sshd[1447]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:34.003414 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:43218.service. 
Dec 13 01:53:34.004019 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:43208.service: Deactivated successfully. Dec 13 01:53:34.004913 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:53:34.005397 systemd-logind[1304]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:53:34.006070 systemd-logind[1304]: Removed session 5. Dec 13 01:53:34.036063 sshd[1455]: Accepted publickey for core from 10.0.0.1 port 43218 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:53:34.037087 sshd[1455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:34.040012 systemd-logind[1304]: New session 6 of user core. Dec 13 01:53:34.040608 systemd[1]: Started session-6.scope. Dec 13 01:53:34.092617 sudo[1462]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:53:34.092807 sudo[1462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:53:34.095486 sudo[1462]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:34.099902 sudo[1461]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:53:34.100090 sudo[1461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:53:34.107909 systemd[1]: Stopping audit-rules.service... 
Dec 13 01:53:34.108000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 01:53:34.109252 auditctl[1465]: No rules Dec 13 01:53:34.110147 kernel: kauditd_printk_skb: 169 callbacks suppressed Dec 13 01:53:34.110174 kernel: audit: type=1305 audit(1734054814.108:135): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 01:53:34.108000 audit[1465]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf3e5f2c0 a2=420 a3=0 items=0 ppid=1 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.116381 kernel: audit: type=1300 audit(1734054814.108:135): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf3e5f2c0 a2=420 a3=0 items=0 ppid=1 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.116404 kernel: audit: type=1327 audit(1734054814.108:135): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 01:53:34.108000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 01:53:34.116694 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:53:34.116930 systemd[1]: Stopped audit-rules.service. Dec 13 01:53:34.117687 kernel: audit: type=1131 audit(1734054814.116:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:34.118407 systemd[1]: Starting audit-rules.service... Dec 13 01:53:34.133151 augenrules[1483]: No rules Dec 13 01:53:34.133911 systemd[1]: Finished audit-rules.service. Dec 13 01:53:34.134743 sudo[1461]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:34.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.136062 sshd[1455]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:34.134000 audit[1461]: USER_END pid=1461 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.138901 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:43218.service: Deactivated successfully. Dec 13 01:53:34.139762 systemd-logind[1304]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:53:34.141833 kernel: audit: type=1130 audit(1734054814.133:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.141873 kernel: audit: type=1106 audit(1734054814.134:138): pid=1461 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.141889 kernel: audit: type=1104 audit(1734054814.134:139): pid=1461 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:34.134000 audit[1461]: CRED_DISP pid=1461 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.142051 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:43232.service. Dec 13 01:53:34.142322 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:53:34.143145 systemd-logind[1304]: Removed session 6. Dec 13 01:53:34.136000 audit[1455]: USER_END pid=1455 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:53:34.149263 kernel: audit: type=1106 audit(1734054814.136:140): pid=1455 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:53:34.149364 kernel: audit: type=1104 audit(1734054814.136:141): pid=1455 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:53:34.136000 audit[1455]: CRED_DISP pid=1455 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:53:34.152638 kernel: audit: type=1131 audit(1734054814.138:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.88:22-10.0.0.1:43218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:34.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.88:22-10.0.0.1:43218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.88:22-10.0.0.1:43232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.180000 audit[1490]: USER_ACCT pid=1490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:53:34.180533 sshd[1490]: Accepted publickey for core from 10.0.0.1 port 43232 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:53:34.180000 audit[1490]: CRED_ACQ pid=1490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:53:34.180000 audit[1490]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7ad5b2c0 a2=3 a3=0 items=0 ppid=1 pid=1490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.180000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:53:34.181350 sshd[1490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:34.184642 systemd-logind[1304]: New session 7 of user core. Dec 13 01:53:34.185343 systemd[1]: Started session-7.scope. 
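The `PROCTITLE` fields in the audit records above (e.g. `proctitle=737368643A20636F7265205B707269765D`) carry the process command line hex-encoded, with NUL bytes separating argv entries. A minimal decoder sketch, assuming ASCII payloads (which holds for every record in this log):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex bytes, NUL-separated argv."""
    raw = bytes.fromhex(hex_str)
    # NUL bytes separate argv elements; join with spaces for display
    return raw.replace(b"\x00", b" ").decode("ascii", errors="replace")

# the record from the sshd session above:
print(decode_proctitle("737368643A20636F7265205B707269765D"))  # → sshd: core [priv]
```

The same helper decodes every other `PROCTITLE` value in this log, including the iptables invocations further down.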
Dec 13 01:53:34.188000 audit[1490]: USER_START pid=1490 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:53:34.189000 audit[1493]: CRED_ACQ pid=1493 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:53:34.235000 audit[1494]: USER_ACCT pid=1494 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.236000 audit[1494]: CRED_REFR pid=1494 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.236322 sudo[1494]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:53:34.236510 sudo[1494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:53:34.237000 audit[1494]: USER_START pid=1494 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:53:34.256158 systemd[1]: Starting docker.service... 
Dec 13 01:53:34.292785 env[1506]: time="2024-12-13T01:53:34.292732166Z" level=info msg="Starting up" Dec 13 01:53:34.294152 env[1506]: time="2024-12-13T01:53:34.294107576Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:53:34.294152 env[1506]: time="2024-12-13T01:53:34.294132613Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:53:34.294152 env[1506]: time="2024-12-13T01:53:34.294157570Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:53:34.294373 env[1506]: time="2024-12-13T01:53:34.294167879Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:53:34.295830 env[1506]: time="2024-12-13T01:53:34.295799460Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:53:34.295830 env[1506]: time="2024-12-13T01:53:34.295817534Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:53:34.295907 env[1506]: time="2024-12-13T01:53:34.295833013Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:53:34.295907 env[1506]: time="2024-12-13T01:53:34.295843342Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:53:34.848409 env[1506]: time="2024-12-13T01:53:34.848358482Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 01:53:34.848409 env[1506]: time="2024-12-13T01:53:34.848389531Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 01:53:34.848675 env[1506]: time="2024-12-13T01:53:34.848620514Z" level=info msg="Loading containers: start." 
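The `env[1506]` dockerd lines above are logfmt-style `key=value` records (`time="…" level=info msg="…" module=grpc`), with quotes escaped inside `msg` payloads. A small parser sketch for pulling fields out of them (the regex and escaping rules are an assumption fitted to the lines shown here, not dockerd's own format spec):

```python
import re

# key=value pairs; value is either a quoted string (with \" escapes) or a bare token
FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_fields(line: str) -> dict:
    """Parse the key=value fields of one dockerd/logfmt log line."""
    out = {}
    for key, val in FIELD.findall(line):
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1].replace('\\"', '"')  # strip quotes, unescape
        out[key] = val
    return out

rec = parse_fields('time="2024-12-13T01:53:34.292732166Z" level=info msg="Starting up"')
print(rec["level"], rec["msg"])  # → info Starting up
```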
Dec 13 01:53:34.897000 audit[1540]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.897000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe5b73dd30 a2=0 a3=7ffe5b73dd1c items=0 ppid=1506 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.897000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 01:53:34.899000 audit[1542]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.899000 audit[1542]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc41999d00 a2=0 a3=7ffc41999cec items=0 ppid=1506 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.899000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 01:53:34.900000 audit[1544]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.900000 audit[1544]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc1632ed30 a2=0 a3=7ffc1632ed1c items=0 ppid=1506 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.900000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 01:53:34.902000 audit[1546]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.902000 audit[1546]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe7eebc750 a2=0 a3=7ffe7eebc73c items=0 ppid=1506 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.902000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 01:53:34.904000 audit[1548]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.904000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff1393c100 a2=0 a3=7fff1393c0ec items=0 ppid=1506 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.904000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 01:53:34.923000 audit[1553]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.923000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc8a6f0a50 a2=0 a3=7ffc8a6f0a3c items=0 ppid=1506 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.923000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 01:53:34.931000 audit[1555]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.931000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc7e8b8c60 a2=0 a3=7ffc7e8b8c4c items=0 ppid=1506 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.931000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 01:53:34.933000 audit[1557]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.933000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc0681a760 a2=0 a3=7ffc0681a74c items=0 ppid=1506 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.933000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 01:53:34.935000 audit[1559]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.935000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffde779f090 a2=0 a3=7ffde779f07c items=0 ppid=1506 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.935000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:53:34.943000 audit[1563]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.943000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffbfbaccb0 a2=0 a3=7fffbfbacc9c items=0 ppid=1506 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.943000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:53:34.948000 audit[1564]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:34.948000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc8f9da0a0 a2=0 a3=7ffc8f9da08c items=0 ppid=1506 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:34.948000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:53:34.957300 kernel: Initializing XFRM netlink socket Dec 13 01:53:34.985828 env[1506]: time="2024-12-13T01:53:34.985782781Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Dec 13 01:53:35.000000 audit[1572]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.000000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe96cdbc60 a2=0 a3=7ffe96cdbc4c items=0 ppid=1506 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.000000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 01:53:35.011000 audit[1575]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.011000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffec8118110 a2=0 a3=7ffec81180fc items=0 ppid=1506 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.011000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 01:53:35.014000 audit[1578]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.014000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff3493ad90 a2=0 a3=7fff3493ad7c items=0 ppid=1506 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.014000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 01:53:35.015000 audit[1580]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.015000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe44ede9d0 a2=0 a3=7ffe44ede9bc items=0 ppid=1506 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.015000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 01:53:35.017000 audit[1582]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.017000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff4cf11420 a2=0 a3=7fff4cf1140c items=0 ppid=1506 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.017000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 01:53:35.019000 audit[1584]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.019000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd2057ae30 a2=0 a3=7ffd2057ae1c items=0 ppid=1506 
pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.019000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 01:53:35.021000 audit[1586]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.021000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff2ce186b0 a2=0 a3=7fff2ce1869c items=0 ppid=1506 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.021000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 01:53:35.027000 audit[1589]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.027000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffccd340790 a2=0 a3=7ffccd34077c items=0 ppid=1506 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.027000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 01:53:35.029000 audit[1591]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.029000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdf4da3360 a2=0 a3=7ffdf4da334c items=0 ppid=1506 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.029000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 01:53:35.031000 audit[1593]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.031000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe426dc930 a2=0 a3=7ffe426dc91c items=0 ppid=1506 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.031000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 01:53:35.033000 audit[1595]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.033000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffdb2a5330 a2=0 a3=7fffdb2a531c items=0 ppid=1506 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.033000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 01:53:35.034460 systemd-networkd[1092]: docker0: Link UP Dec 13 01:53:35.044000 audit[1599]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.044000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff0bc56b60 a2=0 a3=7fff0bc56b4c items=0 ppid=1506 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.044000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:53:35.050000 audit[1600]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:35.050000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe59bdb460 a2=0 a3=7ffe59bdb44c items=0 ppid=1506 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:35.050000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:53:35.050960 env[1506]: time="2024-12-13T01:53:35.050907807Z" level=info msg="Loading containers: done." 
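Decoding the `NETFILTER_CFG`/`PROCTITLE` pairs from the Docker startup above recovers the iptables commands behind the audit trail (DOCKER and DOCKER-USER chain creation, the isolation stages, and the FORWARD jumps). A sketch over two of the hex payloads taken verbatim from the records above, assuming ASCII as before:

```python
def decode_proctitle(hex_str: str) -> list:
    """Split an audit PROCTITLE value back into its argv list."""
    return bytes.fromhex(hex_str).decode("ascii").split("\x00")

# nat-table chain creation, then the FORWARD -> DOCKER-USER insert seen above
for payload in (
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552",
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552",
):
    print(" ".join(decode_proctitle(payload)))
# → /usr/sbin/iptables --wait -t nat -N DOCKER
# → /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER
```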
Dec 13 01:53:35.066811 env[1506]: time="2024-12-13T01:53:35.066749411Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:53:35.067018 env[1506]: time="2024-12-13T01:53:35.066922546Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 01:53:35.067018 env[1506]: time="2024-12-13T01:53:35.067007866Z" level=info msg="Daemon has completed initialization" Dec 13 01:53:35.083365 systemd[1]: Started docker.service. Dec 13 01:53:35.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:35.090899 env[1506]: time="2024-12-13T01:53:35.090841722Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:53:35.861953 env[1318]: time="2024-12-13T01:53:35.861897913Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:53:36.180388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:53:36.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:36.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:36.180575 systemd[1]: Stopped kubelet.service. Dec 13 01:53:36.181984 systemd[1]: Starting kubelet.service... Dec 13 01:53:36.265070 systemd[1]: Started kubelet.service. 
Dec 13 01:53:36.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:36.539079 kubelet[1651]: E1213 01:53:36.538941 1651 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:36.541991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:36.542167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:36.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 01:53:36.935262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210300061.mount: Deactivated successfully. 
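The kubelet exit above (`status=1/FAILURE`) is the config-file check failing: `/var/lib/kubelet/config.yaml` does not exist yet. On kubeadm-provisioned nodes that file is normally written during `kubeadm init`/`kubeadm join`, so a restart loop like this is expected until provisioning completes (a hedged reading of the error text, not something the log itself states). A trivial pre-check sketch mirroring the failure mode:

```python
from pathlib import Path

def kubelet_config_present(path: str = "/var/lib/kubelet/config.yaml") -> bool:
    """True once the kubelet config file exists; kubelet exits 1 until then."""
    return Path(path).is_file()

if not kubelet_config_present():
    print("config missing: expect kubelet.service to fail with exit-code")
```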
Dec 13 01:53:39.157749 env[1318]: time="2024-12-13T01:53:39.157692571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:39.159953 env[1318]: time="2024-12-13T01:53:39.159901575Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:39.161777 env[1318]: time="2024-12-13T01:53:39.161752767Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:39.163429 env[1318]: time="2024-12-13T01:53:39.163390810Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:39.164053 env[1318]: time="2024-12-13T01:53:39.164015061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:53:39.175829 env[1318]: time="2024-12-13T01:53:39.175791921Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:53:42.198618 env[1318]: time="2024-12-13T01:53:42.198563716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:42.201027 env[1318]: time="2024-12-13T01:53:42.200989717Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 01:53:42.203242 env[1318]: time="2024-12-13T01:53:42.203178492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:42.205044 env[1318]: time="2024-12-13T01:53:42.204999117Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:42.205610 env[1318]: time="2024-12-13T01:53:42.205572753Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:53:42.216013 env[1318]: time="2024-12-13T01:53:42.215975566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:53:43.820497 env[1318]: time="2024-12-13T01:53:43.820427239Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:43.823927 env[1318]: time="2024-12-13T01:53:43.823890785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:43.825617 env[1318]: time="2024-12-13T01:53:43.825590554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:43.827257 env[1318]: time="2024-12-13T01:53:43.827230450Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:43.828121 env[1318]: time="2024-12-13T01:53:43.828080825Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:53:43.838651 env[1318]: time="2024-12-13T01:53:43.838629421Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:53:45.150327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216703498.mount: Deactivated successfully. Dec 13 01:53:46.268319 env[1318]: time="2024-12-13T01:53:46.268243232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:46.270259 env[1318]: time="2024-12-13T01:53:46.270217275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:46.271566 env[1318]: time="2024-12-13T01:53:46.271528715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:46.272791 env[1318]: time="2024-12-13T01:53:46.272758511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:46.273162 env[1318]: time="2024-12-13T01:53:46.273137342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference 
\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:53:46.289523 env[1318]: time="2024-12-13T01:53:46.289489454Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:53:46.793050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:53:46.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:46.793233 systemd[1]: Stopped kubelet.service. Dec 13 01:53:46.794591 systemd[1]: Starting kubelet.service... Dec 13 01:53:46.797682 kernel: kauditd_printk_skb: 88 callbacks suppressed Dec 13 01:53:46.797762 kernel: audit: type=1130 audit(1734054826.792:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:46.797792 kernel: audit: type=1131 audit(1734054826.793:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:46.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:46.880672 systemd[1]: Started kubelet.service. Dec 13 01:53:46.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:46.884286 kernel: audit: type=1130 audit(1734054826.880:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:46.956439 kubelet[1694]: E1213 01:53:46.956385 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:46.958388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:46.958521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:46.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 01:53:46.962294 kernel: audit: type=1131 audit(1734054826.958:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 01:53:47.230116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2061533945.mount: Deactivated successfully. 
Dec 13 01:53:48.240979 env[1318]: time="2024-12-13T01:53:48.240929708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:48.242624 env[1318]: time="2024-12-13T01:53:48.242576056Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:48.244209 env[1318]: time="2024-12-13T01:53:48.244183501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:48.245804 env[1318]: time="2024-12-13T01:53:48.245786157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:48.246443 env[1318]: time="2024-12-13T01:53:48.246423513Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:53:48.256185 env[1318]: time="2024-12-13T01:53:48.256159705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:53:48.805872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2784364215.mount: Deactivated successfully. 
Dec 13 01:53:48.810832 env[1318]: time="2024-12-13T01:53:48.810799070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:48.812488 env[1318]: time="2024-12-13T01:53:48.812468301Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:48.813810 env[1318]: time="2024-12-13T01:53:48.813765885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:48.815101 env[1318]: time="2024-12-13T01:53:48.815078527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:48.815509 env[1318]: time="2024-12-13T01:53:48.815490179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:53:48.827249 env[1318]: time="2024-12-13T01:53:48.827227164Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:53:49.435440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355659477.mount: Deactivated successfully. 
Dec 13 01:53:52.404540 env[1318]: time="2024-12-13T01:53:52.404490278Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:52.406321 env[1318]: time="2024-12-13T01:53:52.406299000Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:52.409461 env[1318]: time="2024-12-13T01:53:52.409421277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:52.411062 env[1318]: time="2024-12-13T01:53:52.411034663Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:52.411755 env[1318]: time="2024-12-13T01:53:52.411710491Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:53:54.346588 systemd[1]: Stopped kubelet.service. Dec 13 01:53:54.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:54.348989 systemd[1]: Starting kubelet.service... Dec 13 01:53:54.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:54.353615 kernel: audit: type=1130 audit(1734054834.346:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:54.353675 kernel: audit: type=1131 audit(1734054834.346:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:54.362902 systemd[1]: Reloading. Dec 13 01:53:54.423228 /usr/lib/systemd/system-generators/torcx-generator[1816]: time="2024-12-13T01:53:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:53:54.423569 /usr/lib/systemd/system-generators/torcx-generator[1816]: time="2024-12-13T01:53:54Z" level=info msg="torcx already run" Dec 13 01:53:54.670008 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:53:54.670025 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:53:54.689673 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:53:54.755977 systemd[1]: Started kubelet.service. Dec 13 01:53:54.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:53:54.758693 systemd[1]: Stopping kubelet.service... Dec 13 01:53:54.759016 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:53:54.759343 systemd[1]: Stopped kubelet.service. Dec 13 01:53:54.762757 kernel: audit: type=1130 audit(1734054834.755:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:54.762851 kernel: audit: type=1131 audit(1734054834.755:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:54.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:54.762990 systemd[1]: Starting kubelet.service... Dec 13 01:53:54.834479 systemd[1]: Started kubelet.service. Dec 13 01:53:54.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:54.839340 kernel: audit: type=1130 audit(1734054834.835:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:53:54.876473 kubelet[1875]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:53:54.876473 kubelet[1875]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 01:53:54.876473 kubelet[1875]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:53:54.877380 kubelet[1875]: I1213 01:53:54.877339 1875 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:53:55.178654 kubelet[1875]: I1213 01:53:55.178620 1875 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:53:55.178654 kubelet[1875]: I1213 01:53:55.178651 1875 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:53:55.178918 kubelet[1875]: I1213 01:53:55.178897 1875 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:53:55.194838 kubelet[1875]: I1213 01:53:55.194799 1875 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:53:55.195171 kubelet[1875]: E1213 01:53:55.195150 1875 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:55.203148 kubelet[1875]: I1213 01:53:55.203125 1875 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:53:55.204248 kubelet[1875]: I1213 01:53:55.204227 1875 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:53:55.204418 kubelet[1875]: I1213 01:53:55.204398 1875 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:53:55.204512 kubelet[1875]: I1213 01:53:55.204422 1875 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:53:55.204512 kubelet[1875]: I1213 01:53:55.204430 1875 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:53:55.204512 kubelet[1875]: 
I1213 01:53:55.204509 1875 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:53:55.204595 kubelet[1875]: I1213 01:53:55.204584 1875 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:53:55.204622 kubelet[1875]: I1213 01:53:55.204599 1875 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:53:55.204622 kubelet[1875]: I1213 01:53:55.204622 1875 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:53:55.204664 kubelet[1875]: I1213 01:53:55.204634 1875 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:53:55.207932 kubelet[1875]: W1213 01:53:55.207872 1875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:55.207994 kubelet[1875]: E1213 01:53:55.207939 1875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:55.208119 kubelet[1875]: I1213 01:53:55.208103 1875 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:53:55.208219 kubelet[1875]: W1213 01:53:55.208189 1875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:55.208315 kubelet[1875]: E1213 01:53:55.208301 1875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:55.210435 kubelet[1875]: I1213 01:53:55.210416 1875 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:53:55.210487 kubelet[1875]: W1213 01:53:55.210471 1875 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:53:55.212442 kubelet[1875]: I1213 01:53:55.211867 1875 server.go:1256] "Started kubelet" Dec 13 01:53:55.212442 kubelet[1875]: I1213 01:53:55.211967 1875 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:53:55.212442 kubelet[1875]: I1213 01:53:55.212155 1875 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:53:55.212442 kubelet[1875]: I1213 01:53:55.212401 1875 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:53:55.212751 kubelet[1875]: I1213 01:53:55.212727 1875 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:53:55.213000 audit[1875]: AVC avc: denied { mac_admin } for pid=1875 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:53:55.216365 kubelet[1875]: I1213 01:53:55.213637 1875 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 01:53:55.216365 kubelet[1875]: I1213 01:53:55.213666 1875 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 01:53:55.216365 kubelet[1875]: 
I1213 01:53:55.213768 1875 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:53:55.213000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:53:55.225285 kernel: audit: type=1400 audit(1734054835.213:190): avc: denied { mac_admin } for pid=1875 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:53:55.225336 kernel: audit: type=1401 audit(1734054835.213:190): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:53:55.225355 kernel: audit: type=1300 audit(1734054835.213:190): arch=c000003e syscall=188 success=no exit=-22 a0=c000819680 a1=c0008f6060 a2=c000819650 a3=25 items=0 ppid=1 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.225375 kernel: audit: type=1327 audit(1734054835.213:190): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:53:55.213000 audit[1875]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000819680 a1=c0008f6060 a2=c000819650 a3=25 items=0 ppid=1 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.213000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:53:55.225484 kubelet[1875]: 
I1213 01:53:55.218758 1875 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:53:55.225484 kubelet[1875]: I1213 01:53:55.218832 1875 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:53:55.225484 kubelet[1875]: I1213 01:53:55.218884 1875 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:53:55.225484 kubelet[1875]: W1213 01:53:55.219147 1875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:55.225484 kubelet[1875]: E1213 01:53:55.219181 1875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:55.225484 kubelet[1875]: I1213 01:53:55.219842 1875 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:53:55.225484 kubelet[1875]: I1213 01:53:55.219928 1875 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:53:55.225484 kubelet[1875]: I1213 01:53:55.220572 1875 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:53:55.225484 kubelet[1875]: E1213 01:53:55.222100 1875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="200ms" Dec 13 01:53:55.225484 kubelet[1875]: E1213 01:53:55.222183 1875 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:53:55.225739 kubelet[1875]: E1213 01:53:55.222759 1875 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181099af69d99509 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:53:55.211834633 +0000 UTC m=+0.373701749,LastTimestamp:2024-12-13 01:53:55.211834633 +0000 UTC m=+0.373701749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:53:55.228104 kernel: audit: type=1400 audit(1734054835.213:191): avc: denied { mac_admin } for pid=1875 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:53:55.213000 audit[1875]: AVC avc: denied { mac_admin } for pid=1875 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:53:55.213000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:53:55.213000 audit[1875]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007018a0 a1=c0008f6078 a2=c000819710 a3=25 items=0 ppid=1 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.213000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:53:55.215000 audit[1887]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:55.215000 audit[1887]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd2d6ee1c0 a2=0 a3=7ffd2d6ee1ac items=0 ppid=1875 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.215000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 01:53:55.216000 audit[1888]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:55.216000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeddfec880 a2=0 a3=7ffeddfec86c items=0 ppid=1875 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 01:53:55.224000 audit[1890]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:55.224000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff6623f2f0 a2=0 a3=7fff6623f2dc items=0 ppid=1875 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 01:53:55.226000 audit[1892]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:55.226000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffc17d1df0 a2=0 a3=7fffc17d1ddc items=0 ppid=1875 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 01:53:55.233094 kubelet[1875]: I1213 01:53:55.233080 1875 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 01:53:55.232000 audit[1896]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:55.232000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcd2a463e0 a2=0 a3=7ffcd2a463cc items=0 ppid=1875 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 01:53:55.233000 audit[1898]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:53:55.233000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd1f4ab3b0 a2=0 a3=7ffd1f4ab39c items=0 ppid=1875 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.233000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 01:53:55.233963 kubelet[1875]: I1213 01:53:55.233943 1875 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:53:55.233995 kubelet[1875]: I1213 01:53:55.233967 1875 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:53:55.233995 kubelet[1875]: I1213 01:53:55.233983 1875 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:53:55.234038 kubelet[1875]: E1213 01:53:55.234026 1875 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:53:55.234000 audit[1900]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:55.234000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe24cbcab0 a2=0 a3=7ffe24cbca9c items=0 ppid=1875 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 01:53:55.235000 audit[1902]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:53:55.235000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3e277000 a2=0 a3=7fff3e276fec items=0 ppid=1875 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.235000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 01:53:55.235000 audit[1901]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1901 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:55.235000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2847a1f0 a2=0 a3=7ffc2847a1dc items=0 ppid=1875 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 01:53:55.236000 audit[1903]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:53:55.236000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd48b16820 a2=0 a3=7ffd48b1680c items=0 ppid=1875 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 01:53:55.236000 audit[1904]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:53:55.236000 audit[1904]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe56b3d0d0 a2=0 a3=7ffe56b3d0bc items=0 ppid=1875 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 01:53:55.237638 kubelet[1875]: W1213 01:53:55.237600 1875 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:55.237638 kubelet[1875]: E1213 01:53:55.237640 1875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:55.237760 kubelet[1875]: I1213 01:53:55.237738 1875 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:53:55.237760 kubelet[1875]: I1213 01:53:55.237752 1875 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:53:55.237822 kubelet[1875]: I1213 01:53:55.237766 1875 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:53:55.237000 audit[1905]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:53:55.237000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcdcd954e0 a2=0 a3=7ffcdcd954cc items=0 ppid=1875 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.237000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 01:53:55.320377 kubelet[1875]: I1213 01:53:55.320343 1875 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:53:55.320689 kubelet[1875]: E1213 01:53:55.320656 1875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: 
connect: connection refused" node="localhost" Dec 13 01:53:55.334768 kubelet[1875]: E1213 01:53:55.334726 1875 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:53:55.422490 kubelet[1875]: E1213 01:53:55.422462 1875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="400ms" Dec 13 01:53:55.521788 kubelet[1875]: I1213 01:53:55.521704 1875 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:53:55.522100 kubelet[1875]: E1213 01:53:55.522060 1875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Dec 13 01:53:55.535434 kubelet[1875]: E1213 01:53:55.535398 1875 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:53:55.648610 kubelet[1875]: I1213 01:53:55.648570 1875 policy_none.go:49] "None policy: Start" Dec 13 01:53:55.649374 kubelet[1875]: I1213 01:53:55.649349 1875 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:53:55.649374 kubelet[1875]: I1213 01:53:55.649372 1875 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:53:55.654059 kubelet[1875]: I1213 01:53:55.654018 1875 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:53:55.653000 audit[1875]: AVC avc: denied { mac_admin } for pid=1875 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:53:55.653000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 
01:53:55.653000 audit[1875]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00090d710 a1=c000b90120 a2=c00090d6e0 a3=25 items=0 ppid=1 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:53:55.653000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:53:55.654397 kubelet[1875]: I1213 01:53:55.654128 1875 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 01:53:55.654397 kubelet[1875]: I1213 01:53:55.654341 1875 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:53:55.656813 kubelet[1875]: E1213 01:53:55.656794 1875 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:53:55.823148 kubelet[1875]: E1213 01:53:55.823076 1875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="800ms" Dec 13 01:53:55.923352 kubelet[1875]: I1213 01:53:55.923320 1875 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:53:55.923727 kubelet[1875]: E1213 01:53:55.923657 1875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Dec 13 01:53:55.935890 kubelet[1875]: 
I1213 01:53:55.935855 1875 topology_manager.go:215] "Topology Admit Handler" podUID="e6faff8d9cae5f67948f520a115814a8" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:53:55.936764 kubelet[1875]: I1213 01:53:55.936744 1875 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:53:55.937381 kubelet[1875]: I1213 01:53:55.937335 1875 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:53:56.023518 kubelet[1875]: I1213 01:53:56.023478 1875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6faff8d9cae5f67948f520a115814a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6faff8d9cae5f67948f520a115814a8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:53:56.023675 kubelet[1875]: I1213 01:53:56.023528 1875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6faff8d9cae5f67948f520a115814a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6faff8d9cae5f67948f520a115814a8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:53:56.023675 kubelet[1875]: I1213 01:53:56.023568 1875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:53:56.023675 kubelet[1875]: I1213 01:53:56.023622 1875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:53:56.023675 kubelet[1875]: I1213 01:53:56.023664 1875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6faff8d9cae5f67948f520a115814a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6faff8d9cae5f67948f520a115814a8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:53:56.023801 kubelet[1875]: I1213 01:53:56.023683 1875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:53:56.023801 kubelet[1875]: I1213 01:53:56.023702 1875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:53:56.023801 kubelet[1875]: I1213 01:53:56.023725 1875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:53:56.023801 kubelet[1875]: I1213 01:53:56.023752 1875 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:53:56.208630 kubelet[1875]: W1213 01:53:56.208546 1875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:56.208630 kubelet[1875]: E1213 01:53:56.208592 1875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:56.214889 kubelet[1875]: W1213 01:53:56.214836 1875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:56.214940 kubelet[1875]: E1213 01:53:56.214891 1875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:56.240371 kubelet[1875]: E1213 01:53:56.240355 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:56.240677 kubelet[1875]: E1213 01:53:56.240653 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:56.240903 
env[1318]: time="2024-12-13T01:53:56.240866332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6faff8d9cae5f67948f520a115814a8,Namespace:kube-system,Attempt:0,}" Dec 13 01:53:56.241143 env[1318]: time="2024-12-13T01:53:56.240901909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:53:56.242336 kubelet[1875]: E1213 01:53:56.242301 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:56.242660 env[1318]: time="2024-12-13T01:53:56.242620373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:53:56.479506 kubelet[1875]: W1213 01:53:56.479392 1875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:56.479506 kubelet[1875]: E1213 01:53:56.479452 1875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:56.624264 kubelet[1875]: E1213 01:53:56.624241 1875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="1.6s" Dec 13 01:53:56.724783 kubelet[1875]: I1213 01:53:56.724760 1875 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost" Dec 13 01:53:56.724969 kubelet[1875]: E1213 01:53:56.724952 1875 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Dec 13 01:53:56.805591 kubelet[1875]: W1213 01:53:56.805506 1875 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:56.805591 kubelet[1875]: E1213 01:53:56.805562 1875 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Dec 13 01:53:56.817741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086384240.mount: Deactivated successfully. 
Dec 13 01:53:56.823157 env[1318]: time="2024-12-13T01:53:56.823111087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.824828 env[1318]: time="2024-12-13T01:53:56.824791910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.826976 env[1318]: time="2024-12-13T01:53:56.826950008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.827913 env[1318]: time="2024-12-13T01:53:56.827865856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.829458 env[1318]: time="2024-12-13T01:53:56.829430761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.830506 env[1318]: time="2024-12-13T01:53:56.830487804Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.831770 env[1318]: time="2024-12-13T01:53:56.831746675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.832903 env[1318]: time="2024-12-13T01:53:56.832866856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.833443 env[1318]: time="2024-12-13T01:53:56.833417489Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.835223 env[1318]: time="2024-12-13T01:53:56.835200994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.836394 env[1318]: time="2024-12-13T01:53:56.836367532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.839639 env[1318]: time="2024-12-13T01:53:56.839610596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:56.860155 env[1318]: time="2024-12-13T01:53:56.859977007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:53:56.860155 env[1318]: time="2024-12-13T01:53:56.860008025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:53:56.860155 env[1318]: time="2024-12-13T01:53:56.860020288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:53:56.860155 env[1318]: time="2024-12-13T01:53:56.859964834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:53:56.860155 env[1318]: time="2024-12-13T01:53:56.860011352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:53:56.860155 env[1318]: time="2024-12-13T01:53:56.860021100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:53:56.860155 env[1318]: time="2024-12-13T01:53:56.860133090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33dd56d05afaedd01f28403436da2443b6afb56c4e877c4abcae288c9d59006d pid=1924 runtime=io.containerd.runc.v2 Dec 13 01:53:56.862867 env[1318]: time="2024-12-13T01:53:56.860248827Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d775a80da005645dc7a2035a161dd6c10c87ac49799992b58d44bc0f56a4381 pid=1925 runtime=io.containerd.runc.v2 Dec 13 01:53:56.873254 env[1318]: time="2024-12-13T01:53:56.872997520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:53:56.873254 env[1318]: time="2024-12-13T01:53:56.873033467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:53:56.873254 env[1318]: time="2024-12-13T01:53:56.873046221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:53:56.873254 env[1318]: time="2024-12-13T01:53:56.873158311Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9a5499e78e7516d9f8d44d81b40e5ba61a42fcb2d1f7a99c1386ec13641b0be pid=1967 runtime=io.containerd.runc.v2 Dec 13 01:53:56.913765 env[1318]: time="2024-12-13T01:53:56.913715876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"33dd56d05afaedd01f28403436da2443b6afb56c4e877c4abcae288c9d59006d\"" Dec 13 01:53:56.915014 kubelet[1875]: E1213 01:53:56.914853 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:56.917112 env[1318]: time="2024-12-13T01:53:56.915764929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d775a80da005645dc7a2035a161dd6c10c87ac49799992b58d44bc0f56a4381\"" Dec 13 01:53:56.918157 kubelet[1875]: E1213 01:53:56.918058 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:56.919673 env[1318]: time="2024-12-13T01:53:56.919648223Z" level=info msg="CreateContainer within sandbox \"33dd56d05afaedd01f28403436da2443b6afb56c4e877c4abcae288c9d59006d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:53:56.920451 env[1318]: time="2024-12-13T01:53:56.920426834Z" level=info msg="CreateContainer within sandbox \"3d775a80da005645dc7a2035a161dd6c10c87ac49799992b58d44bc0f56a4381\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 
01:53:56.928475 env[1318]: time="2024-12-13T01:53:56.928435656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6faff8d9cae5f67948f520a115814a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9a5499e78e7516d9f8d44d81b40e5ba61a42fcb2d1f7a99c1386ec13641b0be\"" Dec 13 01:53:56.929252 kubelet[1875]: E1213 01:53:56.929074 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:56.931235 env[1318]: time="2024-12-13T01:53:56.931207375Z" level=info msg="CreateContainer within sandbox \"a9a5499e78e7516d9f8d44d81b40e5ba61a42fcb2d1f7a99c1386ec13641b0be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:53:56.946027 env[1318]: time="2024-12-13T01:53:56.945981056Z" level=info msg="CreateContainer within sandbox \"33dd56d05afaedd01f28403436da2443b6afb56c4e877c4abcae288c9d59006d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e75b4f8c6ee4bb87aebb83314216dcabbdf424b8fcc907c0c39b165497bc042b\"" Dec 13 01:53:56.946415 env[1318]: time="2024-12-13T01:53:56.946388941Z" level=info msg="StartContainer for \"e75b4f8c6ee4bb87aebb83314216dcabbdf424b8fcc907c0c39b165497bc042b\"" Dec 13 01:53:56.953022 env[1318]: time="2024-12-13T01:53:56.952977049Z" level=info msg="CreateContainer within sandbox \"3d775a80da005645dc7a2035a161dd6c10c87ac49799992b58d44bc0f56a4381\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dcdc42d85a8e11b5d64ad6512f7a7c7b3d58a63b6041de1ce5001523ef3aeaa9\"" Dec 13 01:53:56.953448 env[1318]: time="2024-12-13T01:53:56.953411053Z" level=info msg="StartContainer for \"dcdc42d85a8e11b5d64ad6512f7a7c7b3d58a63b6041de1ce5001523ef3aeaa9\"" Dec 13 01:53:56.956306 env[1318]: time="2024-12-13T01:53:56.956250799Z" level=info msg="CreateContainer within sandbox 
\"a9a5499e78e7516d9f8d44d81b40e5ba61a42fcb2d1f7a99c1386ec13641b0be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"182784a5076acb757376e846dc7f362e065b7100c882b301af5badd9d3dde0df\"" Dec 13 01:53:56.957211 env[1318]: time="2024-12-13T01:53:56.956886702Z" level=info msg="StartContainer for \"182784a5076acb757376e846dc7f362e065b7100c882b301af5badd9d3dde0df\"" Dec 13 01:53:56.996944 env[1318]: time="2024-12-13T01:53:56.996909353Z" level=info msg="StartContainer for \"e75b4f8c6ee4bb87aebb83314216dcabbdf424b8fcc907c0c39b165497bc042b\" returns successfully" Dec 13 01:53:57.007778 env[1318]: time="2024-12-13T01:53:57.007741893Z" level=info msg="StartContainer for \"182784a5076acb757376e846dc7f362e065b7100c882b301af5badd9d3dde0df\" returns successfully" Dec 13 01:53:57.024102 env[1318]: time="2024-12-13T01:53:57.024021856Z" level=info msg="StartContainer for \"dcdc42d85a8e11b5d64ad6512f7a7c7b3d58a63b6041de1ce5001523ef3aeaa9\" returns successfully" Dec 13 01:53:57.242879 kubelet[1875]: E1213 01:53:57.242768 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:57.245060 kubelet[1875]: E1213 01:53:57.245040 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:57.252055 kubelet[1875]: E1213 01:53:57.252030 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:58.187843 kubelet[1875]: E1213 01:53:58.187795 1875 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 01:53:58.226911 kubelet[1875]: E1213 01:53:58.226886 1875 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:53:58.253329 kubelet[1875]: E1213 01:53:58.253315 1875 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:53:58.326342 kubelet[1875]: I1213 01:53:58.326324 1875 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:53:58.331906 kubelet[1875]: I1213 01:53:58.331892 1875 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:53:58.336904 kubelet[1875]: E1213 01:53:58.336881 1875 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:53:58.437046 kubelet[1875]: E1213 01:53:58.437030 1875 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:53:58.537513 kubelet[1875]: E1213 01:53:58.537425 1875 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:53:58.637986 kubelet[1875]: E1213 01:53:58.637950 1875 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:53:58.738664 kubelet[1875]: E1213 01:53:58.738624 1875 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:53:58.839078 kubelet[1875]: E1213 01:53:58.839041 1875 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:53:58.939437 kubelet[1875]: E1213 01:53:58.939390 1875 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:53:59.210482 kubelet[1875]: I1213 01:53:59.210360 1875 apiserver.go:52] "Watching apiserver" Dec 13 01:53:59.219392 
kubelet[1875]: I1213 01:53:59.219357 1875 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:54:00.924402 systemd[1]: Reloading. Dec 13 01:54:00.981232 /usr/lib/systemd/system-generators/torcx-generator[2171]: time="2024-12-13T01:54:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:54:00.981633 /usr/lib/systemd/system-generators/torcx-generator[2171]: time="2024-12-13T01:54:00Z" level=info msg="torcx already run" Dec 13 01:54:01.057399 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:54:01.057416 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:54:01.077063 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:01.150445 kubelet[1875]: I1213 01:54:01.150397 1875 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:01.150526 systemd[1]: Stopping kubelet.service... Dec 13 01:54:01.175662 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:54:01.176030 systemd[1]: Stopped kubelet.service. Dec 13 01:54:01.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:01.177137 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 01:54:01.177182 kernel: audit: type=1131 audit(1734054841.174:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:01.177629 systemd[1]: Starting kubelet.service... Dec 13 01:54:01.256873 systemd[1]: Started kubelet.service. Dec 13 01:54:01.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:01.263297 kernel: audit: type=1130 audit(1734054841.256:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:01.297368 kubelet[2228]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:01.297790 kubelet[2228]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:54:01.297864 kubelet[2228]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:54:01.298031 kubelet[2228]: I1213 01:54:01.297995 2228 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:54:01.302944 kubelet[2228]: I1213 01:54:01.302914 2228 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:54:01.302944 kubelet[2228]: I1213 01:54:01.302940 2228 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:54:01.303219 kubelet[2228]: I1213 01:54:01.303129 2228 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:54:01.305333 kubelet[2228]: I1213 01:54:01.305311 2228 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:54:01.308384 kubelet[2228]: I1213 01:54:01.308345 2228 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:01.318888 kubelet[2228]: I1213 01:54:01.318831 2228 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:54:01.319489 kubelet[2228]: I1213 01:54:01.319474 2228 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:54:01.319671 kubelet[2228]: I1213 01:54:01.319650 2228 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:54:01.319835 kubelet[2228]: I1213 01:54:01.319683 2228 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:54:01.319835 kubelet[2228]: I1213 01:54:01.319693 2228 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:54:01.319835 kubelet[2228]: 
I1213 01:54:01.319726 2228 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:01.319975 kubelet[2228]: I1213 01:54:01.319842 2228 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:54:01.319975 kubelet[2228]: I1213 01:54:01.319858 2228 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:54:01.319975 kubelet[2228]: I1213 01:54:01.319881 2228 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:54:01.319975 kubelet[2228]: I1213 01:54:01.319892 2228 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:54:01.320633 kubelet[2228]: I1213 01:54:01.320613 2228 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:54:01.320784 kubelet[2228]: I1213 01:54:01.320766 2228 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:54:01.321176 kubelet[2228]: I1213 01:54:01.321156 2228 server.go:1256] "Started kubelet" Dec 13 01:54:01.321000 audit[2228]: AVC avc: denied { mac_admin } for pid=2228 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:54:01.330707 kernel: audit: type=1400 audit(1734054841.321:207): avc: denied { mac_admin } for pid=2228 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:54:01.330841 kernel: audit: type=1401 audit(1734054841.321:207): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:54:01.321000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:54:01.330906 kubelet[2228]: I1213 01:54:01.327217 2228 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:54:01.330906 kubelet[2228]: I1213 01:54:01.327467 2228 
server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:54:01.330906 kubelet[2228]: I1213 01:54:01.327512 2228 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:54:01.330906 kubelet[2228]: I1213 01:54:01.328323 2228 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:54:01.321000 audit[2228]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0004fbb00 a1=c000908678 a2=c0004fbad0 a3=25 items=0 ppid=1 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:01.337619 kernel: audit: type=1300 audit(1734054841.321:207): arch=c000003e syscall=188 success=no exit=-22 a0=c0004fbb00 a1=c000908678 a2=c0004fbad0 a3=25 items=0 ppid=1 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:01.337823 kubelet[2228]: I1213 01:54:01.337792 2228 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 01:54:01.337911 kubelet[2228]: I1213 01:54:01.337869 2228 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 01:54:01.337936 kubelet[2228]: I1213 01:54:01.337924 2228 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:54:01.343590 kernel: audit: type=1327 audit(1734054841.321:207): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:54:01.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:54:01.343734 kubelet[2228]: I1213 01:54:01.341247 2228 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:54:01.343734 kubelet[2228]: I1213 01:54:01.341373 2228 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:54:01.343734 kubelet[2228]: I1213 01:54:01.341498 2228 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:54:01.348076 kernel: audit: type=1400 audit(1734054841.336:208): avc: denied { mac_admin } for pid=2228 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:54:01.336000 audit[2228]: AVC avc: denied { mac_admin } for pid=2228 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:54:01.348209 kubelet[2228]: E1213 01:54:01.347889 2228 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:54:01.336000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:54:01.356125 kernel: audit: type=1401 audit(1734054841.336:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:54:01.356214 kernel: audit: type=1300 audit(1734054841.336:208): arch=c000003e syscall=188 success=no exit=-22 a0=c0007919e0 a1=c000908690 a2=c0004fbb90 a3=25 items=0 ppid=1 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:01.336000 audit[2228]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007919e0 a1=c000908690 a2=c0004fbb90 a3=25 items=0 ppid=1 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:01.336000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:54:01.362304 kernel: audit: type=1327 audit(1734054841.336:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:54:01.365120 kubelet[2228]: I1213 01:54:01.365070 2228 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:54:01.365856 kubelet[2228]: I1213 01:54:01.365809 2228 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:54:01.368090 kubelet[2228]: I1213 01:54:01.368071 2228 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:54:01.394429 kubelet[2228]: I1213 01:54:01.394387 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:54:01.395400 kubelet[2228]: I1213 01:54:01.395373 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:54:01.395473 kubelet[2228]: I1213 01:54:01.395415 2228 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:54:01.395473 kubelet[2228]: I1213 01:54:01.395441 2228 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:54:01.395536 kubelet[2228]: E1213 01:54:01.395520 2228 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:54:01.422919 kubelet[2228]: I1213 01:54:01.422878 2228 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:54:01.422919 kubelet[2228]: I1213 01:54:01.422907 2228 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:54:01.422919 kubelet[2228]: I1213 01:54:01.422921 2228 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:01.423121 kubelet[2228]: I1213 01:54:01.423056 2228 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:54:01.423121 kubelet[2228]: I1213 01:54:01.423074 2228 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:54:01.423121 kubelet[2228]: I1213 01:54:01.423079 2228 policy_none.go:49] "None policy: Start" Dec 13 01:54:01.423683 kubelet[2228]: I1213 01:54:01.423661 2228 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:54:01.423683 kubelet[2228]: I1213 01:54:01.423686 2228 state_mem.go:35] "Initializing new in-memory state store" Dec 13 
01:54:01.423828 kubelet[2228]: I1213 01:54:01.423813 2228 state_mem.go:75] "Updated machine memory state" Dec 13 01:54:01.424738 kubelet[2228]: I1213 01:54:01.424718 2228 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:54:01.423000 audit[2228]: AVC avc: denied { mac_admin } for pid=2228 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:54:01.423000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:54:01.423000 audit[2228]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0013d4990 a1=c001202e10 a2=c0013d4960 a3=25 items=0 ppid=1 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:01.423000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:54:01.424967 kubelet[2228]: I1213 01:54:01.424775 2228 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 01:54:01.424967 kubelet[2228]: I1213 01:54:01.424945 2228 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:54:01.444674 kubelet[2228]: I1213 01:54:01.444577 2228 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:54:01.451086 kubelet[2228]: I1213 01:54:01.451047 2228 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:54:01.451198 kubelet[2228]: I1213 01:54:01.451132 2228 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:54:01.496301 kubelet[2228]: I1213 01:54:01.496246 2228 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:54:01.496456 kubelet[2228]: I1213 01:54:01.496355 2228 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:54:01.496456 kubelet[2228]: I1213 01:54:01.496385 2228 topology_manager.go:215] "Topology Admit Handler" podUID="e6faff8d9cae5f67948f520a115814a8" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:54:01.545265 kubelet[2228]: I1213 01:54:01.545220 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:54:01.545265 kubelet[2228]: I1213 01:54:01.545289 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e6faff8d9cae5f67948f520a115814a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6faff8d9cae5f67948f520a115814a8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:54:01.545483 kubelet[2228]: I1213 01:54:01.545318 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:54:01.545483 kubelet[2228]: I1213 01:54:01.545347 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:54:01.545483 kubelet[2228]: I1213 01:54:01.545374 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:54:01.545483 kubelet[2228]: I1213 01:54:01.545398 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:54:01.545483 kubelet[2228]: I1213 01:54:01.545421 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:54:01.545590 kubelet[2228]: I1213 01:54:01.545444 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6faff8d9cae5f67948f520a115814a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6faff8d9cae5f67948f520a115814a8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:54:01.545590 kubelet[2228]: I1213 01:54:01.545468 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6faff8d9cae5f67948f520a115814a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6faff8d9cae5f67948f520a115814a8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:54:01.804281 kubelet[2228]: E1213 01:54:01.804178 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:01.804959 kubelet[2228]: E1213 01:54:01.804611 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:01.808445 kubelet[2228]: E1213 01:54:01.808412 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:02.325726 kubelet[2228]: I1213 01:54:02.325680 2228 apiserver.go:52] "Watching apiserver" Dec 13 01:54:02.342390 kubelet[2228]: I1213 01:54:02.342348 2228 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:54:02.406222 kubelet[2228]: 
E1213 01:54:02.406196 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:02.406609 kubelet[2228]: E1213 01:54:02.406580 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:02.596832 kubelet[2228]: E1213 01:54:02.596718 2228 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:54:02.597411 kubelet[2228]: E1213 01:54:02.597392 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:02.612211 kubelet[2228]: I1213 01:54:02.612145 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.612084472 podStartE2EDuration="1.612084472s" podCreationTimestamp="2024-12-13 01:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:02.606495707 +0000 UTC m=+1.345680388" watchObservedRunningTime="2024-12-13 01:54:02.612084472 +0000 UTC m=+1.351269153" Dec 13 01:54:02.616809 kubelet[2228]: I1213 01:54:02.616760 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6167156029999998 podStartE2EDuration="1.616715603s" podCreationTimestamp="2024-12-13 01:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:02.616640169 +0000 UTC m=+1.355824860" watchObservedRunningTime="2024-12-13 
01:54:02.616715603 +0000 UTC m=+1.355900284" Dec 13 01:54:02.641624 kubelet[2228]: I1213 01:54:02.641567 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6415277929999998 podStartE2EDuration="1.641527793s" podCreationTimestamp="2024-12-13 01:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:02.633993691 +0000 UTC m=+1.373178372" watchObservedRunningTime="2024-12-13 01:54:02.641527793 +0000 UTC m=+1.380712464" Dec 13 01:54:03.407770 kubelet[2228]: E1213 01:54:03.407735 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:03.408167 kubelet[2228]: E1213 01:54:03.407827 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:05.862343 sudo[1494]: pam_unix(sudo:session): session closed for user root Dec 13 01:54:05.861000 audit[1494]: USER_END pid=1494 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:54:05.861000 audit[1494]: CRED_DISP pid=1494 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:05.863444 sshd[1490]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:05.862000 audit[1490]: USER_END pid=1490 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:05.862000 audit[1490]: CRED_DISP pid=1490 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:05.865167 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:43232.service: Deactivated successfully. Dec 13 01:54:05.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.88:22-10.0.0.1:43232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:05.866212 systemd-logind[1304]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:54:05.866213 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:54:05.867122 systemd-logind[1304]: Removed session 7. 
Dec 13 01:54:07.537193 kubelet[2228]: E1213 01:54:07.537148 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:08.413639 kubelet[2228]: E1213 01:54:08.413606 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:08.881361 kubelet[2228]: E1213 01:54:08.881333 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:09.415432 kubelet[2228]: E1213 01:54:09.415401 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:09.415689 kubelet[2228]: E1213 01:54:09.415545 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:09.632049 update_engine[1307]: I1213 01:54:09.631991 1307 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:54:11.268629 kubelet[2228]: E1213 01:54:11.268592 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:11.418041 kubelet[2228]: E1213 01:54:11.418015 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:15.196577 kubelet[2228]: I1213 01:54:15.196521 2228 topology_manager.go:215] "Topology Admit Handler" podUID="cb164800-62a0-4164-8b29-e508eb933c48" podNamespace="kube-system" podName="kube-proxy-dw92n" Dec 13 01:54:15.210186 kubelet[2228]: I1213 01:54:15.210140 2228 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:54:15.210515 env[1318]: time="2024-12-13T01:54:15.210472376Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:54:15.210772 kubelet[2228]: I1213 01:54:15.210667 2228 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:54:15.248657 kubelet[2228]: I1213 01:54:15.248609 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb164800-62a0-4164-8b29-e508eb933c48-xtables-lock\") pod \"kube-proxy-dw92n\" (UID: \"cb164800-62a0-4164-8b29-e508eb933c48\") " pod="kube-system/kube-proxy-dw92n" Dec 13 01:54:15.248657 kubelet[2228]: I1213 01:54:15.248648 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb164800-62a0-4164-8b29-e508eb933c48-lib-modules\") pod \"kube-proxy-dw92n\" (UID: \"cb164800-62a0-4164-8b29-e508eb933c48\") " pod="kube-system/kube-proxy-dw92n" Dec 13 01:54:15.248657 kubelet[2228]: I1213 01:54:15.248668 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5dkt\" (UniqueName: \"kubernetes.io/projected/cb164800-62a0-4164-8b29-e508eb933c48-kube-api-access-n5dkt\") pod \"kube-proxy-dw92n\" (UID: \"cb164800-62a0-4164-8b29-e508eb933c48\") " pod="kube-system/kube-proxy-dw92n" Dec 13 01:54:15.248889 kubelet[2228]: I1213 01:54:15.248685 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb164800-62a0-4164-8b29-e508eb933c48-kube-proxy\") pod \"kube-proxy-dw92n\" (UID: \"cb164800-62a0-4164-8b29-e508eb933c48\") " pod="kube-system/kube-proxy-dw92n" Dec 13 01:54:15.270661 kubelet[2228]: I1213 01:54:15.270628 2228 topology_manager.go:215] "Topology Admit Handler" podUID="59a91783-8824-45dd-88fd-cae7479f6a62" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-lzvsx" Dec 13 01:54:15.349887 kubelet[2228]: I1213 01:54:15.349860 2228 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzh47\" (UniqueName: \"kubernetes.io/projected/59a91783-8824-45dd-88fd-cae7479f6a62-kube-api-access-dzh47\") pod \"tigera-operator-c7ccbd65-lzvsx\" (UID: \"59a91783-8824-45dd-88fd-cae7479f6a62\") " pod="tigera-operator/tigera-operator-c7ccbd65-lzvsx" Dec 13 01:54:15.350094 kubelet[2228]: I1213 01:54:15.350077 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/59a91783-8824-45dd-88fd-cae7479f6a62-var-lib-calico\") pod \"tigera-operator-c7ccbd65-lzvsx\" (UID: \"59a91783-8824-45dd-88fd-cae7479f6a62\") " pod="tigera-operator/tigera-operator-c7ccbd65-lzvsx" Dec 13 01:54:15.499849 kubelet[2228]: E1213 01:54:15.499736 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:15.500337 env[1318]: time="2024-12-13T01:54:15.500266531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dw92n,Uid:cb164800-62a0-4164-8b29-e508eb933c48,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:15.517311 env[1318]: time="2024-12-13T01:54:15.517214593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:15.517311 env[1318]: time="2024-12-13T01:54:15.517257324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:15.517311 env[1318]: time="2024-12-13T01:54:15.517277983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:15.517540 env[1318]: time="2024-12-13T01:54:15.517449618Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b50add321f5d7085e52a403d333598752d9613c0a5787dc5ae94292796d31c2 pid=2339 runtime=io.containerd.runc.v2 Dec 13 01:54:15.548060 env[1318]: time="2024-12-13T01:54:15.548005018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dw92n,Uid:cb164800-62a0-4164-8b29-e508eb933c48,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b50add321f5d7085e52a403d333598752d9613c0a5787dc5ae94292796d31c2\"" Dec 13 01:54:15.548679 kubelet[2228]: E1213 01:54:15.548645 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:15.550282 env[1318]: time="2024-12-13T01:54:15.550235009Z" level=info msg="CreateContainer within sandbox \"2b50add321f5d7085e52a403d333598752d9613c0a5787dc5ae94292796d31c2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:54:15.568864 env[1318]: time="2024-12-13T01:54:15.568814098Z" level=info msg="CreateContainer within sandbox \"2b50add321f5d7085e52a403d333598752d9613c0a5787dc5ae94292796d31c2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0a6f06a684be92a6703696dd9382ebf4030722b8f92434493acb6e62f36f85a\"" Dec 13 01:54:15.570221 env[1318]: time="2024-12-13T01:54:15.570181315Z" level=info msg="StartContainer for \"f0a6f06a684be92a6703696dd9382ebf4030722b8f92434493acb6e62f36f85a\"" Dec 13 01:54:15.573610 env[1318]: time="2024-12-13T01:54:15.573562897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-lzvsx,Uid:59a91783-8824-45dd-88fd-cae7479f6a62,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:54:15.586910 env[1318]: time="2024-12-13T01:54:15.586799082Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:15.587045 env[1318]: time="2024-12-13T01:54:15.586880296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:15.587045 env[1318]: time="2024-12-13T01:54:15.586912467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:15.587180 env[1318]: time="2024-12-13T01:54:15.587070326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8aec66cb1a5a1955166fdb014f83a40aa620256b18f5627a9f68ccd1cdd2f99 pid=2395 runtime=io.containerd.runc.v2 Dec 13 01:54:15.617457 env[1318]: time="2024-12-13T01:54:15.617414646Z" level=info msg="StartContainer for \"f0a6f06a684be92a6703696dd9382ebf4030722b8f92434493acb6e62f36f85a\" returns successfully" Dec 13 01:54:15.638953 env[1318]: time="2024-12-13T01:54:15.638839873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-lzvsx,Uid:59a91783-8824-45dd-88fd-cae7479f6a62,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a8aec66cb1a5a1955166fdb014f83a40aa620256b18f5627a9f68ccd1cdd2f99\"" Dec 13 01:54:15.640533 env[1318]: time="2024-12-13T01:54:15.640130786Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:54:15.685366 kernel: kauditd_printk_skb: 9 callbacks suppressed Dec 13 01:54:15.685496 kernel: audit: type=1325 audit(1734054855.677:215): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.685518 kernel: audit: type=1300 audit(1734054855.677:215): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc2ab7d30 a2=0 a3=7ffdc2ab7d1c items=0 ppid=2407 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.677000 audit[2470]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.677000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc2ab7d30 a2=0 a3=7ffdc2ab7d1c items=0 ppid=2407 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 01:54:15.692814 kernel: audit: type=1327 audit(1734054855.677:215): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 01:54:15.692865 kernel: audit: type=1325 audit(1734054855.678:216): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.678000 audit[2471]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.678000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb9ffc740 a2=0 a3=7fffb9ffc72c items=0 ppid=2407 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.702332 kernel: audit: type=1300 audit(1734054855.678:216): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb9ffc740 a2=0 a3=7fffb9ffc72c items=0 ppid=2407 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.702371 kernel: audit: type=1327 audit(1734054855.678:216): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 01:54:15.678000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 01:54:15.679000 audit[2473]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.708499 kernel: audit: type=1325 audit(1734054855.679:217): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.708549 kernel: audit: type=1300 audit(1734054855.679:217): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddda17410 a2=0 a3=7ffddda173fc items=0 ppid=2407 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.679000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddda17410 a2=0 a3=7ffddda173fc items=0 ppid=2407 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.679000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 01:54:15.717536 kernel: audit: type=1327 audit(1734054855.679:217): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 01:54:15.717590 kernel: audit: type=1325 audit(1734054855.680:218): table=filter:41 family=10 
entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.680000 audit[2474]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.680000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5c40c890 a2=0 a3=7ffd5c40c87c items=0 ppid=2407 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 01:54:15.682000 audit[2472]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.682000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed64b5040 a2=0 a3=7ffed64b502c items=0 ppid=2407 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 01:54:15.683000 audit[2475]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.683000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed179ab40 a2=0 a3=7ffed179ab2c items=0 ppid=2407 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.683000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 01:54:15.780000 audit[2476]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.780000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc276767f0 a2=0 a3=7ffc276767dc items=0 ppid=2407 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 01:54:15.782000 audit[2478]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.782000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc434abeb0 a2=0 a3=7ffc434abe9c items=0 ppid=2407 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.782000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 01:54:15.785000 audit[2481]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.785000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff4e5ae5b0 a2=0 a3=7fff4e5ae59c items=0 ppid=2407 pid=2481 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 01:54:15.786000 audit[2482]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.786000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5d58ae60 a2=0 a3=7ffd5d58ae4c items=0 ppid=2407 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 01:54:15.788000 audit[2484]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.788000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2becffe0 a2=0 a3=7ffe2becffcc items=0 ppid=2407 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.788000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 01:54:15.789000 audit[2485]: 
NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.789000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe738db2b0 a2=0 a3=7ffe738db29c items=0 ppid=2407 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 01:54:15.791000 audit[2487]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.791000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4e4739e0 a2=0 a3=7ffd4e4739cc items=0 ppid=2407 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 01:54:15.794000 audit[2490]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.794000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe647fc480 a2=0 a3=7ffe647fc46c items=0 ppid=2407 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
01:54:15.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 01:54:15.794000 audit[2491]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.794000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2b49d3d0 a2=0 a3=7ffd2b49d3bc items=0 ppid=2407 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 01:54:15.796000 audit[2493]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.796000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfe62b2c0 a2=0 a3=7ffdfe62b2ac items=0 ppid=2407 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 01:54:15.797000 audit[2494]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.797000 audit[2494]: SYSCALL arch=c000003e syscall=46 
success=yes exit=104 a0=3 a1=7fff059f1bb0 a2=0 a3=7fff059f1b9c items=0 ppid=2407 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.797000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 01:54:15.799000 audit[2496]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.799000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd5bfc330 a2=0 a3=7fffd5bfc31c items=0 ppid=2407 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 01:54:15.802000 audit[2499]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.802000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6483a170 a2=0 a3=7fff6483a15c items=0 ppid=2407 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.802000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 01:54:15.805000 audit[2502]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.805000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe6dca380 a2=0 a3=7fffe6dca36c items=0 ppid=2407 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.805000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 01:54:15.806000 audit[2503]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.806000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffde735370 a2=0 a3=7fffde73535c items=0 ppid=2407 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 01:54:15.808000 audit[2505]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.808000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffcbcd0f2f0 a2=0 a3=7ffcbcd0f2dc items=0 ppid=2407 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 01:54:15.810000 audit[2508]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.810000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd6adab50 a2=0 a3=7ffdd6adab3c items=0 ppid=2407 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 01:54:15.811000 audit[2509]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.811000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff87b63290 a2=0 a3=7fff87b6327c items=0 ppid=2407 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 
01:54:15.813000 audit[2511]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:54:15.813000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc82d23820 a2=0 a3=7ffc82d2380c items=0 ppid=2407 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 01:54:15.829000 audit[2517]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:15.829000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc5f71b060 a2=0 a3=7ffc5f71b04c items=0 ppid=2407 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:15.838000 audit[2517]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:15.838000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc5f71b060 a2=0 a3=7ffc5f71b04c items=0 ppid=2407 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.838000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:15.840000 audit[2523]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.840000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffce611d050 a2=0 a3=7ffce611d03c items=0 ppid=2407 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.840000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 01:54:15.842000 audit[2525]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.842000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffed3297400 a2=0 a3=7ffed32973ec items=0 ppid=2407 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 01:54:15.845000 audit[2528]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.845000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffc58615380 a2=0 a3=7ffc5861536c items=0 ppid=2407 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 01:54:15.846000 audit[2529]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.846000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea4c93a80 a2=0 a3=7ffea4c93a6c items=0 ppid=2407 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.846000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 01:54:15.848000 audit[2531]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.848000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd7ce2e990 a2=0 a3=7ffd7ce2e97c items=0 ppid=2407 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.848000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 01:54:15.849000 audit[2532]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.849000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffd866fb0 a2=0 a3=7ffffd866f9c items=0 ppid=2407 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 01:54:15.851000 audit[2534]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.851000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdb2c2f060 a2=0 a3=7ffdb2c2f04c items=0 ppid=2407 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 01:54:15.854000 audit[2537]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.854000 audit[2537]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffc5a8a1ed0 a2=0 a3=7ffc5a8a1ebc items=0 ppid=2407 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 01:54:15.855000 audit[2538]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.855000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd260fff00 a2=0 a3=7ffd260ffeec items=0 ppid=2407 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 01:54:15.857000 audit[2540]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.857000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd62c21760 a2=0 a3=7ffd62c2174c items=0 ppid=2407 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.857000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 01:54:15.858000 audit[2541]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.858000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd693eaf0 a2=0 a3=7ffcd693eadc items=0 ppid=2407 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 01:54:15.860000 audit[2543]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.860000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd5ce54470 a2=0 a3=7ffd5ce5445c items=0 ppid=2407 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 01:54:15.863000 audit[2546]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.863000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffe2a7838b0 a2=0 a3=7ffe2a78389c items=0 ppid=2407 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 01:54:15.866000 audit[2549]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.866000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe23144b30 a2=0 a3=7ffe23144b1c items=0 ppid=2407 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 01:54:15.867000 audit[2550]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.867000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcbe937d50 a2=0 a3=7ffcbe937d3c items=0 ppid=2407 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.867000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 01:54:15.868000 audit[2552]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.868000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffde6609360 a2=0 a3=7ffde660934c items=0 ppid=2407 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 01:54:15.871000 audit[2555]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.871000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffde78a4f90 a2=0 a3=7ffde78a4f7c items=0 ppid=2407 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 01:54:15.872000 audit[2556]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.872000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc42facf50 a2=0 a3=7ffc42facf3c items=0 ppid=2407 
pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 01:54:15.874000 audit[2558]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.874000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcc7dea1c0 a2=0 a3=7ffcc7dea1ac items=0 ppid=2407 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 01:54:15.875000 audit[2559]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.875000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5d4869f0 a2=0 a3=7ffc5d4869dc items=0 ppid=2407 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.875000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 01:54:15.877000 audit[2561]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 13 01:54:15.877000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd283f2e00 a2=0 a3=7ffd283f2dec items=0 ppid=2407 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 01:54:15.879000 audit[2564]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:54:15.879000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff002b7850 a2=0 a3=7fff002b783c items=0 ppid=2407 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 01:54:15.882000 audit[2566]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 01:54:15.882000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffd1eb13950 a2=0 a3=7ffd1eb1393c items=0 ppid=2407 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.882000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:15.882000 audit[2566]: NETFILTER_CFG table=nat:88 
family=10 entries=7 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 01:54:15.882000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd1eb13950 a2=0 a3=7ffd1eb1393c items=0 ppid=2407 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:15.882000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:16.425903 kubelet[2228]: E1213 01:54:16.425872 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:16.434472 kubelet[2228]: I1213 01:54:16.434444 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dw92n" podStartSLOduration=1.434409316 podStartE2EDuration="1.434409316s" podCreationTimestamp="2024-12-13 01:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:16.434340877 +0000 UTC m=+15.173525558" watchObservedRunningTime="2024-12-13 01:54:16.434409316 +0000 UTC m=+15.173593997" Dec 13 01:54:17.017360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033012582.mount: Deactivated successfully. 
Dec 13 01:54:17.943727 env[1318]: time="2024-12-13T01:54:17.943674074Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:17.945566 env[1318]: time="2024-12-13T01:54:17.945530365Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:17.947116 env[1318]: time="2024-12-13T01:54:17.947070096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:17.948470 env[1318]: time="2024-12-13T01:54:17.948427533Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:17.949131 env[1318]: time="2024-12-13T01:54:17.949096067Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:54:17.951144 env[1318]: time="2024-12-13T01:54:17.951097131Z" level=info msg="CreateContainer within sandbox \"a8aec66cb1a5a1955166fdb014f83a40aa620256b18f5627a9f68ccd1cdd2f99\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:54:17.963433 env[1318]: time="2024-12-13T01:54:17.963359898Z" level=info msg="CreateContainer within sandbox \"a8aec66cb1a5a1955166fdb014f83a40aa620256b18f5627a9f68ccd1cdd2f99\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"86363210d14ce9bc78d575087f75ca5595f3bd995a06cb6d8d71eaf4ea2d700e\"" Dec 13 01:54:17.963930 env[1318]: time="2024-12-13T01:54:17.963719378Z" level=info msg="StartContainer for 
\"86363210d14ce9bc78d575087f75ca5595f3bd995a06cb6d8d71eaf4ea2d700e\"" Dec 13 01:54:18.367714 env[1318]: time="2024-12-13T01:54:18.367661037Z" level=info msg="StartContainer for \"86363210d14ce9bc78d575087f75ca5595f3bd995a06cb6d8d71eaf4ea2d700e\" returns successfully" Dec 13 01:54:20.724000 audit[2606]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:20.727042 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 01:54:20.727100 kernel: audit: type=1325 audit(1734054860.724:266): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:20.724000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff6ce5cbb0 a2=0 a3=7fff6ce5cb9c items=0 ppid=2407 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:20.734093 kernel: audit: type=1300 audit(1734054860.724:266): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff6ce5cbb0 a2=0 a3=7fff6ce5cb9c items=0 ppid=2407 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:20.734159 kernel: audit: type=1327 audit(1734054860.724:266): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:20.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:20.736000 audit[2606]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 
01:54:20.736000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6ce5cbb0 a2=0 a3=0 items=0 ppid=2407 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:20.744798 kernel: audit: type=1325 audit(1734054860.736:267): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:20.744899 kernel: audit: type=1300 audit(1734054860.736:267): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6ce5cbb0 a2=0 a3=0 items=0 ppid=2407 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:20.744931 kernel: audit: type=1327 audit(1734054860.736:267): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:20.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:20.750000 audit[2608]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:20.750000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdcde1d3c0 a2=0 a3=7ffdcde1d3ac items=0 ppid=2407 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:20.758976 kernel: audit: type=1325 audit(1734054860.750:268): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Dec 13 01:54:20.759031 kernel: audit: type=1300 audit(1734054860.750:268): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdcde1d3c0 a2=0 a3=7ffdcde1d3ac items=0 ppid=2407 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:20.759056 kernel: audit: type=1327 audit(1734054860.750:268): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:20.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:20.761000 audit[2608]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:20.761000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdcde1d3c0 a2=0 a3=0 items=0 ppid=2407 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:20.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:20.765303 kernel: audit: type=1325 audit(1734054860.761:269): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:20.850034 kubelet[2228]: I1213 01:54:20.850002 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-lzvsx" podStartSLOduration=3.540267612 podStartE2EDuration="5.849954498s" podCreationTimestamp="2024-12-13 01:54:15 +0000 UTC" firstStartedPulling="2024-12-13 01:54:15.639788048 +0000 UTC m=+14.378972729" 
lastFinishedPulling="2024-12-13 01:54:17.949474934 +0000 UTC m=+16.688659615" observedRunningTime="2024-12-13 01:54:18.437546545 +0000 UTC m=+17.176731226" watchObservedRunningTime="2024-12-13 01:54:20.849954498 +0000 UTC m=+19.589139180" Dec 13 01:54:20.851184 kubelet[2228]: I1213 01:54:20.851164 2228 topology_manager.go:215] "Topology Admit Handler" podUID="154e05db-bad6-4f25-9e8e-f41ec452f36f" podNamespace="calico-system" podName="calico-typha-bb4f7fc99-6g5bb" Dec 13 01:54:20.884678 kubelet[2228]: I1213 01:54:20.884643 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr2ch\" (UniqueName: \"kubernetes.io/projected/154e05db-bad6-4f25-9e8e-f41ec452f36f-kube-api-access-dr2ch\") pod \"calico-typha-bb4f7fc99-6g5bb\" (UID: \"154e05db-bad6-4f25-9e8e-f41ec452f36f\") " pod="calico-system/calico-typha-bb4f7fc99-6g5bb" Dec 13 01:54:20.884678 kubelet[2228]: I1213 01:54:20.884685 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/154e05db-bad6-4f25-9e8e-f41ec452f36f-tigera-ca-bundle\") pod \"calico-typha-bb4f7fc99-6g5bb\" (UID: \"154e05db-bad6-4f25-9e8e-f41ec452f36f\") " pod="calico-system/calico-typha-bb4f7fc99-6g5bb" Dec 13 01:54:20.884878 kubelet[2228]: I1213 01:54:20.884705 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/154e05db-bad6-4f25-9e8e-f41ec452f36f-typha-certs\") pod \"calico-typha-bb4f7fc99-6g5bb\" (UID: \"154e05db-bad6-4f25-9e8e-f41ec452f36f\") " pod="calico-system/calico-typha-bb4f7fc99-6g5bb" Dec 13 01:54:20.904511 kubelet[2228]: I1213 01:54:20.904460 2228 topology_manager.go:215] "Topology Admit Handler" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" podNamespace="calico-system" podName="calico-node-7wdql" Dec 13 01:54:20.985946 kubelet[2228]: I1213 01:54:20.985815 2228 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-xtables-lock\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.985946 kubelet[2228]: I1213 01:54:20.985863 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-log-dir\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.985946 kubelet[2228]: I1213 01:54:20.985894 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1700d6cc-17fc-42bf-b164-298c2c341d88-node-certs\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.986172 kubelet[2228]: I1213 01:54:20.985954 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-flexvol-driver-host\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.986172 kubelet[2228]: I1213 01:54:20.986057 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-lib-modules\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.986172 kubelet[2228]: I1213 01:54:20.986076 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-net-dir\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.986172 kubelet[2228]: I1213 01:54:20.986103 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-policysync\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.986172 kubelet[2228]: I1213 01:54:20.986122 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1700d6cc-17fc-42bf-b164-298c2c341d88-tigera-ca-bundle\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.986400 kubelet[2228]: I1213 01:54:20.986139 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-bin-dir\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.986400 kubelet[2228]: I1213 01:54:20.986191 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-var-run-calico\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.986400 kubelet[2228]: I1213 01:54:20.986209 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-var-lib-calico\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:20.986400 kubelet[2228]: I1213 01:54:20.986227 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gcqs\" (UniqueName: \"kubernetes.io/projected/1700d6cc-17fc-42bf-b164-298c2c341d88-kube-api-access-2gcqs\") pod \"calico-node-7wdql\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " pod="calico-system/calico-node-7wdql" Dec 13 01:54:21.022578 kubelet[2228]: I1213 01:54:21.022545 2228 topology_manager.go:215] "Topology Admit Handler" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" podNamespace="calico-system" podName="csi-node-driver-t2vq9" Dec 13 01:54:21.023056 kubelet[2228]: E1213 01:54:21.023036 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:21.087451 kubelet[2228]: I1213 01:54:21.087404 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7369f4a7-4a25-4cba-bc4e-08b9ad330777-varrun\") pod \"csi-node-driver-t2vq9\" (UID: \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\") " pod="calico-system/csi-node-driver-t2vq9" Dec 13 01:54:21.087635 kubelet[2228]: I1213 01:54:21.087610 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7369f4a7-4a25-4cba-bc4e-08b9ad330777-registration-dir\") pod \"csi-node-driver-t2vq9\" (UID: \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\") " pod="calico-system/csi-node-driver-t2vq9" Dec 13 
01:54:21.087684 kubelet[2228]: I1213 01:54:21.087650 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7369f4a7-4a25-4cba-bc4e-08b9ad330777-socket-dir\") pod \"csi-node-driver-t2vq9\" (UID: \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\") " pod="calico-system/csi-node-driver-t2vq9" Dec 13 01:54:21.087757 kubelet[2228]: I1213 01:54:21.087739 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7369f4a7-4a25-4cba-bc4e-08b9ad330777-kubelet-dir\") pod \"csi-node-driver-t2vq9\" (UID: \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\") " pod="calico-system/csi-node-driver-t2vq9" Dec 13 01:54:21.087801 kubelet[2228]: I1213 01:54:21.087770 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrz5b\" (UniqueName: \"kubernetes.io/projected/7369f4a7-4a25-4cba-bc4e-08b9ad330777-kube-api-access-qrz5b\") pod \"csi-node-driver-t2vq9\" (UID: \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\") " pod="calico-system/csi-node-driver-t2vq9" Dec 13 01:54:21.095822 kubelet[2228]: E1213 01:54:21.095794 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.095822 kubelet[2228]: W1213 01:54:21.095817 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.095966 kubelet[2228]: E1213 01:54:21.095850 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.102305 kubelet[2228]: E1213 01:54:21.102278 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.102305 kubelet[2228]: W1213 01:54:21.102300 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.102443 kubelet[2228]: E1213 01:54:21.102323 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.156972 kubelet[2228]: E1213 01:54:21.156937 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:21.157708 env[1318]: time="2024-12-13T01:54:21.157666536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bb4f7fc99-6g5bb,Uid:154e05db-bad6-4f25-9e8e-f41ec452f36f,Namespace:calico-system,Attempt:0,}" Dec 13 01:54:21.182040 env[1318]: time="2024-12-13T01:54:21.181960149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:21.182168 env[1318]: time="2024-12-13T01:54:21.182065718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:21.182168 env[1318]: time="2024-12-13T01:54:21.182096545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:21.182439 env[1318]: time="2024-12-13T01:54:21.182390952Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df8fc89f8ad324ff42ea575be2e58a67d60111ce4d80552d4693533223f4b63c pid=2622 runtime=io.containerd.runc.v2 Dec 13 01:54:21.190325 kubelet[2228]: E1213 01:54:21.190286 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.190325 kubelet[2228]: W1213 01:54:21.190310 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.190495 kubelet[2228]: E1213 01:54:21.190338 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.191097 kubelet[2228]: E1213 01:54:21.190650 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.191097 kubelet[2228]: W1213 01:54:21.190661 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.191097 kubelet[2228]: E1213 01:54:21.190698 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.191097 kubelet[2228]: E1213 01:54:21.190971 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.191097 kubelet[2228]: W1213 01:54:21.191000 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.191097 kubelet[2228]: E1213 01:54:21.191026 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.191339 kubelet[2228]: E1213 01:54:21.191290 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.191339 kubelet[2228]: W1213 01:54:21.191297 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.191339 kubelet[2228]: E1213 01:54:21.191312 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.192688 kubelet[2228]: E1213 01:54:21.192647 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.192688 kubelet[2228]: W1213 01:54:21.192665 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.192688 kubelet[2228]: E1213 01:54:21.192686 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.192989 kubelet[2228]: E1213 01:54:21.192967 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.192989 kubelet[2228]: W1213 01:54:21.192982 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.193090 kubelet[2228]: E1213 01:54:21.193077 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.193212 kubelet[2228]: E1213 01:54:21.193193 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.193212 kubelet[2228]: W1213 01:54:21.193207 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.193331 kubelet[2228]: E1213 01:54:21.193306 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.193486 kubelet[2228]: E1213 01:54:21.193463 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.193544 kubelet[2228]: W1213 01:54:21.193493 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.193544 kubelet[2228]: E1213 01:54:21.193518 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.193740 kubelet[2228]: E1213 01:54:21.193722 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.193740 kubelet[2228]: W1213 01:54:21.193735 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.193838 kubelet[2228]: E1213 01:54:21.193767 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.193975 kubelet[2228]: E1213 01:54:21.193956 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.193975 kubelet[2228]: W1213 01:54:21.193969 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.194060 kubelet[2228]: E1213 01:54:21.193997 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.194209 kubelet[2228]: E1213 01:54:21.194191 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.194209 kubelet[2228]: W1213 01:54:21.194203 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.194314 kubelet[2228]: E1213 01:54:21.194226 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.195322 kubelet[2228]: E1213 01:54:21.195302 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.195383 kubelet[2228]: W1213 01:54:21.195321 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.195383 kubelet[2228]: E1213 01:54:21.195347 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.195609 kubelet[2228]: E1213 01:54:21.195583 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.195609 kubelet[2228]: W1213 01:54:21.195608 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.195728 kubelet[2228]: E1213 01:54:21.195702 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.195864 kubelet[2228]: E1213 01:54:21.195845 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.195917 kubelet[2228]: W1213 01:54:21.195873 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.195990 kubelet[2228]: E1213 01:54:21.195969 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.196141 kubelet[2228]: E1213 01:54:21.196122 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.196141 kubelet[2228]: W1213 01:54:21.196136 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.196233 kubelet[2228]: E1213 01:54:21.196160 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.196957 kubelet[2228]: E1213 01:54:21.196938 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.196957 kubelet[2228]: W1213 01:54:21.196952 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.197048 kubelet[2228]: E1213 01:54:21.196988 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.198718 kubelet[2228]: E1213 01:54:21.198697 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.198718 kubelet[2228]: W1213 01:54:21.198712 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.198845 kubelet[2228]: E1213 01:54:21.198815 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.198978 kubelet[2228]: E1213 01:54:21.198958 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.198978 kubelet[2228]: W1213 01:54:21.198977 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.199147 kubelet[2228]: E1213 01:54:21.199129 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.199245 kubelet[2228]: E1213 01:54:21.199229 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.199324 kubelet[2228]: W1213 01:54:21.199244 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.199324 kubelet[2228]: E1213 01:54:21.199279 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.199531 kubelet[2228]: E1213 01:54:21.199515 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.199531 kubelet[2228]: W1213 01:54:21.199527 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.199621 kubelet[2228]: E1213 01:54:21.199550 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.199765 kubelet[2228]: E1213 01:54:21.199744 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.199765 kubelet[2228]: W1213 01:54:21.199756 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.199867 kubelet[2228]: E1213 01:54:21.199778 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.200012 kubelet[2228]: E1213 01:54:21.199996 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.200012 kubelet[2228]: W1213 01:54:21.200009 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.200102 kubelet[2228]: E1213 01:54:21.200041 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.200265 kubelet[2228]: E1213 01:54:21.200248 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.200353 kubelet[2228]: W1213 01:54:21.200260 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.200353 kubelet[2228]: E1213 01:54:21.200324 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.200522 kubelet[2228]: E1213 01:54:21.200503 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.200522 kubelet[2228]: W1213 01:54:21.200516 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.200612 kubelet[2228]: E1213 01:54:21.200535 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.200747 kubelet[2228]: E1213 01:54:21.200728 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.200747 kubelet[2228]: W1213 01:54:21.200741 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.200840 kubelet[2228]: E1213 01:54:21.200760 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:21.201596 kubelet[2228]: E1213 01:54:21.201578 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:21.201596 kubelet[2228]: W1213 01:54:21.201592 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:21.201596 kubelet[2228]: E1213 01:54:21.201611 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:21.206710 kubelet[2228]: E1213 01:54:21.206684 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:21.207684 env[1318]: time="2024-12-13T01:54:21.207634336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7wdql,Uid:1700d6cc-17fc-42bf-b164-298c2c341d88,Namespace:calico-system,Attempt:0,}" Dec 13 01:54:21.231013 env[1318]: time="2024-12-13T01:54:21.230978085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bb4f7fc99-6g5bb,Uid:154e05db-bad6-4f25-9e8e-f41ec452f36f,Namespace:calico-system,Attempt:0,} returns sandbox id \"df8fc89f8ad324ff42ea575be2e58a67d60111ce4d80552d4693533223f4b63c\"" Dec 13 01:54:21.233305 kubelet[2228]: E1213 01:54:21.233132 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:21.235742 env[1318]: time="2024-12-13T01:54:21.235647939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:54:21.264756 env[1318]: time="2024-12-13T01:54:21.264598932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:21.264756 env[1318]: time="2024-12-13T01:54:21.264647704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:21.264756 env[1318]: time="2024-12-13T01:54:21.264661019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:21.265183 env[1318]: time="2024-12-13T01:54:21.265105448Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab pid=2690 runtime=io.containerd.runc.v2 Dec 13 01:54:21.294182 env[1318]: time="2024-12-13T01:54:21.294138465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7wdql,Uid:1700d6cc-17fc-42bf-b164-298c2c341d88,Namespace:calico-system,Attempt:0,} returns sandbox id \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\"" Dec 13 01:54:21.294685 kubelet[2228]: E1213 01:54:21.294668 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:21.770000 audit[2724]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2724 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:21.770000 audit[2724]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fff5c97ef70 a2=0 a3=7fff5c97ef5c items=0 ppid=2407 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:21.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:21.776000 audit[2724]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2724 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:21.776000 audit[2724]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5c97ef70 a2=0 a3=0 items=0 ppid=2407 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:21.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:22.396306 kubelet[2228]: E1213 01:54:22.396251 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:22.568607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256454348.mount: Deactivated successfully. Dec 13 01:54:23.433804 env[1318]: time="2024-12-13T01:54:23.433750903Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:23.435881 env[1318]: time="2024-12-13T01:54:23.435849832Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:23.437417 env[1318]: time="2024-12-13T01:54:23.437391791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:23.438855 env[1318]: time="2024-12-13T01:54:23.438828840Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:23.439403 env[1318]: time="2024-12-13T01:54:23.439373027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" 
returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:54:23.439769 env[1318]: time="2024-12-13T01:54:23.439747413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:54:23.446385 env[1318]: time="2024-12-13T01:54:23.446340217Z" level=info msg="CreateContainer within sandbox \"df8fc89f8ad324ff42ea575be2e58a67d60111ce4d80552d4693533223f4b63c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:54:23.461745 env[1318]: time="2024-12-13T01:54:23.461683545Z" level=info msg="CreateContainer within sandbox \"df8fc89f8ad324ff42ea575be2e58a67d60111ce4d80552d4693533223f4b63c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5095b2c9c36508b3dd05006fa40aaec7eff632cfffc567a6cc6144d42166ec47\"" Dec 13 01:54:23.462387 env[1318]: time="2024-12-13T01:54:23.462358247Z" level=info msg="StartContainer for \"5095b2c9c36508b3dd05006fa40aaec7eff632cfffc567a6cc6144d42166ec47\"" Dec 13 01:54:23.516856 env[1318]: time="2024-12-13T01:54:23.516778072Z" level=info msg="StartContainer for \"5095b2c9c36508b3dd05006fa40aaec7eff632cfffc567a6cc6144d42166ec47\" returns successfully" Dec 13 01:54:24.396028 kubelet[2228]: E1213 01:54:24.395984 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:24.439041 kubelet[2228]: E1213 01:54:24.439012 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:24.449297 kubelet[2228]: I1213 01:54:24.448997 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="calico-system/calico-typha-bb4f7fc99-6g5bb" podStartSLOduration=2.243082347 podStartE2EDuration="4.448944608s" podCreationTimestamp="2024-12-13 01:54:20 +0000 UTC" firstStartedPulling="2024-12-13 01:54:21.233729779 +0000 UTC m=+19.972914450" lastFinishedPulling="2024-12-13 01:54:23.43959202 +0000 UTC m=+22.178776711" observedRunningTime="2024-12-13 01:54:24.448203991 +0000 UTC m=+23.187388672" watchObservedRunningTime="2024-12-13 01:54:24.448944608 +0000 UTC m=+23.188129289" Dec 13 01:54:24.491139 kubelet[2228]: E1213 01:54:24.491118 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.491139 kubelet[2228]: W1213 01:54:24.491136 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.491289 kubelet[2228]: E1213 01:54:24.491156 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.491321 kubelet[2228]: E1213 01:54:24.491309 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.491321 kubelet[2228]: W1213 01:54:24.491315 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.491363 kubelet[2228]: E1213 01:54:24.491324 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.491474 kubelet[2228]: E1213 01:54:24.491451 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.491474 kubelet[2228]: W1213 01:54:24.491460 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.491474 kubelet[2228]: E1213 01:54:24.491469 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.491670 kubelet[2228]: E1213 01:54:24.491606 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.491670 kubelet[2228]: W1213 01:54:24.491612 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.491670 kubelet[2228]: E1213 01:54:24.491620 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.491796 kubelet[2228]: E1213 01:54:24.491778 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.491796 kubelet[2228]: W1213 01:54:24.491788 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.491796 kubelet[2228]: E1213 01:54:24.491797 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.491934 kubelet[2228]: E1213 01:54:24.491922 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.491934 kubelet[2228]: W1213 01:54:24.491930 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.491934 kubelet[2228]: E1213 01:54:24.491938 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.492065 kubelet[2228]: E1213 01:54:24.492054 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.492065 kubelet[2228]: W1213 01:54:24.492065 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.492114 kubelet[2228]: E1213 01:54:24.492073 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.492226 kubelet[2228]: E1213 01:54:24.492214 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.492226 kubelet[2228]: W1213 01:54:24.492223 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.492288 kubelet[2228]: E1213 01:54:24.492231 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.492380 kubelet[2228]: E1213 01:54:24.492369 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.492380 kubelet[2228]: W1213 01:54:24.492378 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.492437 kubelet[2228]: E1213 01:54:24.492386 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.492513 kubelet[2228]: E1213 01:54:24.492502 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.492513 kubelet[2228]: W1213 01:54:24.492511 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.492565 kubelet[2228]: E1213 01:54:24.492519 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.492645 kubelet[2228]: E1213 01:54:24.492633 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.492645 kubelet[2228]: W1213 01:54:24.492641 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.492700 kubelet[2228]: E1213 01:54:24.492649 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.492797 kubelet[2228]: E1213 01:54:24.492785 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.492797 kubelet[2228]: W1213 01:54:24.492793 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.492861 kubelet[2228]: E1213 01:54:24.492801 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.492961 kubelet[2228]: E1213 01:54:24.492949 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.492961 kubelet[2228]: W1213 01:54:24.492957 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.493010 kubelet[2228]: E1213 01:54:24.492967 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.493118 kubelet[2228]: E1213 01:54:24.493107 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.493118 kubelet[2228]: W1213 01:54:24.493115 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.493165 kubelet[2228]: E1213 01:54:24.493123 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.493255 kubelet[2228]: E1213 01:54:24.493244 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.493255 kubelet[2228]: W1213 01:54:24.493252 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.493329 kubelet[2228]: E1213 01:54:24.493261 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.519635 kubelet[2228]: E1213 01:54:24.519607 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.519635 kubelet[2228]: W1213 01:54:24.519625 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.519712 kubelet[2228]: E1213 01:54:24.519646 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.519829 kubelet[2228]: E1213 01:54:24.519808 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.519829 kubelet[2228]: W1213 01:54:24.519818 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.519829 kubelet[2228]: E1213 01:54:24.519832 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.520040 kubelet[2228]: E1213 01:54:24.520016 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.520040 kubelet[2228]: W1213 01:54:24.520036 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.520091 kubelet[2228]: E1213 01:54:24.520059 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.520228 kubelet[2228]: E1213 01:54:24.520211 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.520228 kubelet[2228]: W1213 01:54:24.520222 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.520319 kubelet[2228]: E1213 01:54:24.520239 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.520422 kubelet[2228]: E1213 01:54:24.520410 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.520422 kubelet[2228]: W1213 01:54:24.520419 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.520472 kubelet[2228]: E1213 01:54:24.520434 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.520582 kubelet[2228]: E1213 01:54:24.520571 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.520582 kubelet[2228]: W1213 01:54:24.520580 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.520634 kubelet[2228]: E1213 01:54:24.520593 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.520816 kubelet[2228]: E1213 01:54:24.520801 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.520816 kubelet[2228]: W1213 01:54:24.520812 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.520889 kubelet[2228]: E1213 01:54:24.520826 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.521058 kubelet[2228]: E1213 01:54:24.521042 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.521058 kubelet[2228]: W1213 01:54:24.521055 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.521111 kubelet[2228]: E1213 01:54:24.521072 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.521239 kubelet[2228]: E1213 01:54:24.521226 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.521239 kubelet[2228]: W1213 01:54:24.521237 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.521306 kubelet[2228]: E1213 01:54:24.521286 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.521434 kubelet[2228]: E1213 01:54:24.521421 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.521461 kubelet[2228]: W1213 01:54:24.521433 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.521491 kubelet[2228]: E1213 01:54:24.521462 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.521584 kubelet[2228]: E1213 01:54:24.521571 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.521584 kubelet[2228]: W1213 01:54:24.521581 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.521632 kubelet[2228]: E1213 01:54:24.521595 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.521732 kubelet[2228]: E1213 01:54:24.521716 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.521732 kubelet[2228]: W1213 01:54:24.521725 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.521844 kubelet[2228]: E1213 01:54:24.521740 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.521892 kubelet[2228]: E1213 01:54:24.521879 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.521892 kubelet[2228]: W1213 01:54:24.521888 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.521944 kubelet[2228]: E1213 01:54:24.521900 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.522122 kubelet[2228]: E1213 01:54:24.522106 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.522122 kubelet[2228]: W1213 01:54:24.522117 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.522200 kubelet[2228]: E1213 01:54:24.522132 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.522291 kubelet[2228]: E1213 01:54:24.522264 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.522291 kubelet[2228]: W1213 01:54:24.522283 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.522346 kubelet[2228]: E1213 01:54:24.522295 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.522492 kubelet[2228]: E1213 01:54:24.522468 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.522492 kubelet[2228]: W1213 01:54:24.522486 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.522620 kubelet[2228]: E1213 01:54:24.522516 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.522744 kubelet[2228]: E1213 01:54:24.522728 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.522744 kubelet[2228]: W1213 01:54:24.522740 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.522805 kubelet[2228]: E1213 01:54:24.522753 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:24.522944 kubelet[2228]: E1213 01:54:24.522932 2228 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:24.522944 kubelet[2228]: W1213 01:54:24.522942 2228 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:24.522996 kubelet[2228]: E1213 01:54:24.522952 2228 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:24.955066 env[1318]: time="2024-12-13T01:54:24.955019869Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:24.956705 env[1318]: time="2024-12-13T01:54:24.956682554Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:24.958103 env[1318]: time="2024-12-13T01:54:24.958066382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:24.959319 env[1318]: time="2024-12-13T01:54:24.959292043Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:24.959647 env[1318]: time="2024-12-13T01:54:24.959609902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image 
reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:54:24.961071 env[1318]: time="2024-12-13T01:54:24.961045179Z" level=info msg="CreateContainer within sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:54:24.973862 env[1318]: time="2024-12-13T01:54:24.973827107Z" level=info msg="CreateContainer within sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480\"" Dec 13 01:54:24.975085 env[1318]: time="2024-12-13T01:54:24.974257409Z" level=info msg="StartContainer for \"262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480\"" Dec 13 01:54:25.033755 env[1318]: time="2024-12-13T01:54:25.033701422Z" level=info msg="StartContainer for \"262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480\" returns successfully" Dec 13 01:54:25.228321 env[1318]: time="2024-12-13T01:54:25.228181478Z" level=info msg="shim disconnected" id=262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480 Dec 13 01:54:25.228321 env[1318]: time="2024-12-13T01:54:25.228229148Z" level=warning msg="cleaning up after shim disconnected" id=262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480 namespace=k8s.io Dec 13 01:54:25.228321 env[1318]: time="2024-12-13T01:54:25.228239718Z" level=info msg="cleaning up dead shim" Dec 13 01:54:25.234079 env[1318]: time="2024-12-13T01:54:25.234041743Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2848 runtime=io.containerd.runc.v2\n" Dec 13 01:54:25.441374 kubelet[2228]: I1213 01:54:25.441331 2228 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:54:25.441977 kubelet[2228]: E1213 01:54:25.441945 2228 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:25.442554 kubelet[2228]: E1213 01:54:25.442523 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:25.443413 env[1318]: time="2024-12-13T01:54:25.443362641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:54:25.446032 systemd[1]: run-containerd-runc-k8s.io-262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480-runc.PAr6F8.mount: Deactivated successfully. Dec 13 01:54:25.446184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480-rootfs.mount: Deactivated successfully. Dec 13 01:54:26.396328 kubelet[2228]: E1213 01:54:26.396290 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:28.396810 kubelet[2228]: E1213 01:54:28.396747 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:30.138518 env[1318]: time="2024-12-13T01:54:30.138447178Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:30.140830 env[1318]: time="2024-12-13T01:54:30.140763448Z" level=info 
msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:30.142476 env[1318]: time="2024-12-13T01:54:30.142447027Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:30.144107 env[1318]: time="2024-12-13T01:54:30.144057919Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:30.144655 env[1318]: time="2024-12-13T01:54:30.144610970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:54:30.146778 env[1318]: time="2024-12-13T01:54:30.146741981Z" level=info msg="CreateContainer within sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:54:30.162980 env[1318]: time="2024-12-13T01:54:30.162920483Z" level=info msg="CreateContainer within sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc\"" Dec 13 01:54:30.163728 env[1318]: time="2024-12-13T01:54:30.163493582Z" level=info msg="StartContainer for \"185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc\"" Dec 13 01:54:30.224018 env[1318]: time="2024-12-13T01:54:30.223968581Z" level=info msg="StartContainer for \"185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc\" returns successfully" Dec 13 01:54:30.396575 kubelet[2228]: E1213 01:54:30.396463 
2228 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:30.450002 kubelet[2228]: E1213 01:54:30.449974 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:31.373155 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:52594.service. Dec 13 01:54:31.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.88:22-10.0.0.1:52594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:31.374406 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 01:54:31.374475 kernel: audit: type=1130 audit(1734054871.373:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.88:22-10.0.0.1:52594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:31.451374 kubelet[2228]: E1213 01:54:31.451351 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:31.850000 audit[2907]: USER_ACCT pid=2907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:31.851160 sshd[2907]: Accepted publickey for core from 10.0.0.1 port 52594 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:54:31.871698 kernel: audit: type=1101 audit(1734054871.850:273): pid=2907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:31.871817 kernel: audit: type=1103 audit(1734054871.867:274): pid=2907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:31.867000 audit[2907]: CRED_ACQ pid=2907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:31.868615 sshd[2907]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:54:31.874394 kernel: audit: type=1006 audit(1734054871.867:275): pid=2907 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 13 01:54:31.867000 audit[2907]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 
a1=7ffe0f3c4a80 a2=3 a3=0 items=0 ppid=1 pid=2907 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:31.876246 systemd-logind[1304]: New session 8 of user core. Dec 13 01:54:31.876950 systemd[1]: Started session-8.scope. Dec 13 01:54:31.878594 kernel: audit: type=1300 audit(1734054871.867:275): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0f3c4a80 a2=3 a3=0 items=0 ppid=1 pid=2907 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:31.867000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:31.883306 kernel: audit: type=1327 audit(1734054871.867:275): proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:31.884000 audit[2907]: USER_START pid=2907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:31.895401 kernel: audit: type=1105 audit(1734054871.884:276): pid=2907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:31.895561 kernel: audit: type=1103 audit(1734054871.884:277): pid=2910 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:31.884000 audit[2910]: CRED_ACQ pid=2910 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:32.221184 sshd[2907]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:32.221000 audit[2907]: USER_END pid=2907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:32.223749 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:52594.service: Deactivated successfully. Dec 13 01:54:32.224816 systemd-logind[1304]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:54:32.224846 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:54:32.225832 systemd-logind[1304]: Removed session 8. Dec 13 01:54:32.222000 audit[2907]: CRED_DISP pid=2907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:32.231024 kernel: audit: type=1106 audit(1734054872.221:278): pid=2907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:32.231087 kernel: audit: type=1104 audit(1734054872.222:279): pid=2907 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:32.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.88:22-10.0.0.1:52594 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:32.352059 env[1318]: time="2024-12-13T01:54:32.351993046Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:54:32.368525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc-rootfs.mount: Deactivated successfully. Dec 13 01:54:32.378175 env[1318]: time="2024-12-13T01:54:32.378124051Z" level=info msg="shim disconnected" id=185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc Dec 13 01:54:32.378363 env[1318]: time="2024-12-13T01:54:32.378175828Z" level=warning msg="cleaning up after shim disconnected" id=185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc namespace=k8s.io Dec 13 01:54:32.378363 env[1318]: time="2024-12-13T01:54:32.378187370Z" level=info msg="cleaning up dead shim" Dec 13 01:54:32.384334 env[1318]: time="2024-12-13T01:54:32.384282427Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2939 runtime=io.containerd.runc.v2\n" Dec 13 01:54:32.396082 kubelet[2228]: E1213 01:54:32.396041 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:32.437928 kubelet[2228]: I1213 01:54:32.437892 2228 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:54:32.453947 kubelet[2228]: I1213 01:54:32.453912 2228 topology_manager.go:215] "Topology Admit Handler" 
podUID="03656298-6b0b-422b-a3a9-1c9ae4e861d5" podNamespace="kube-system" podName="coredns-76f75df574-qc46k" Dec 13 01:54:32.456648 kubelet[2228]: I1213 01:54:32.456623 2228 topology_manager.go:215] "Topology Admit Handler" podUID="c24c6b5a-d5bf-438a-ad13-509ca76dd573" podNamespace="kube-system" podName="coredns-76f75df574-q9qs2" Dec 13 01:54:32.457328 kubelet[2228]: E1213 01:54:32.457311 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:32.459536 env[1318]: time="2024-12-13T01:54:32.459499677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:54:32.459784 kubelet[2228]: I1213 01:54:32.459761 2228 topology_manager.go:215] "Topology Admit Handler" podUID="2e3c9f76-8cdf-4757-acdc-92eda3454b96" podNamespace="calico-apiserver" podName="calico-apiserver-7f458bd975-jfg7r" Dec 13 01:54:32.460103 kubelet[2228]: I1213 01:54:32.460083 2228 topology_manager.go:215] "Topology Admit Handler" podUID="210d3ffb-3280-4c87-8159-32ab42140bc1" podNamespace="calico-system" podName="calico-kube-controllers-5784655f99-jwwjd" Dec 13 01:54:32.461064 kubelet[2228]: I1213 01:54:32.461025 2228 topology_manager.go:215] "Topology Admit Handler" podUID="57931067-2814-4ccb-9fc1-1f61db24c542" podNamespace="calico-apiserver" podName="calico-apiserver-7f458bd975-shd8j" Dec 13 01:54:32.576663 kubelet[2228]: I1213 01:54:32.576619 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03656298-6b0b-422b-a3a9-1c9ae4e861d5-config-volume\") pod \"coredns-76f75df574-qc46k\" (UID: \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\") " pod="kube-system/coredns-76f75df574-qc46k" Dec 13 01:54:32.576829 kubelet[2228]: I1213 01:54:32.576676 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57931067-2814-4ccb-9fc1-1f61db24c542-calico-apiserver-certs\") pod \"calico-apiserver-7f458bd975-shd8j\" (UID: \"57931067-2814-4ccb-9fc1-1f61db24c542\") " pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" Dec 13 01:54:32.576829 kubelet[2228]: I1213 01:54:32.576697 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210d3ffb-3280-4c87-8159-32ab42140bc1-tigera-ca-bundle\") pod \"calico-kube-controllers-5784655f99-jwwjd\" (UID: \"210d3ffb-3280-4c87-8159-32ab42140bc1\") " pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" Dec 13 01:54:32.576915 kubelet[2228]: I1213 01:54:32.576872 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c24c6b5a-d5bf-438a-ad13-509ca76dd573-config-volume\") pod \"coredns-76f75df574-q9qs2\" (UID: \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\") " pod="kube-system/coredns-76f75df574-q9qs2" Dec 13 01:54:32.576954 kubelet[2228]: I1213 01:54:32.576938 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf7jb\" (UniqueName: \"kubernetes.io/projected/2e3c9f76-8cdf-4757-acdc-92eda3454b96-kube-api-access-jf7jb\") pod \"calico-apiserver-7f458bd975-jfg7r\" (UID: \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\") " pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" Dec 13 01:54:32.576994 kubelet[2228]: I1213 01:54:32.576966 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e3c9f76-8cdf-4757-acdc-92eda3454b96-calico-apiserver-certs\") pod \"calico-apiserver-7f458bd975-jfg7r\" (UID: \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\") " pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" Dec 13 
01:54:32.576994 kubelet[2228]: I1213 01:54:32.576985 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bwhc\" (UniqueName: \"kubernetes.io/projected/57931067-2814-4ccb-9fc1-1f61db24c542-kube-api-access-8bwhc\") pod \"calico-apiserver-7f458bd975-shd8j\" (UID: \"57931067-2814-4ccb-9fc1-1f61db24c542\") " pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" Dec 13 01:54:32.577064 kubelet[2228]: I1213 01:54:32.577026 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrz7\" (UniqueName: \"kubernetes.io/projected/c24c6b5a-d5bf-438a-ad13-509ca76dd573-kube-api-access-lvrz7\") pod \"coredns-76f75df574-q9qs2\" (UID: \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\") " pod="kube-system/coredns-76f75df574-q9qs2" Dec 13 01:54:32.577104 kubelet[2228]: I1213 01:54:32.577092 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwcs2\" (UniqueName: \"kubernetes.io/projected/03656298-6b0b-422b-a3a9-1c9ae4e861d5-kube-api-access-kwcs2\") pod \"coredns-76f75df574-qc46k\" (UID: \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\") " pod="kube-system/coredns-76f75df574-qc46k" Dec 13 01:54:32.577142 kubelet[2228]: I1213 01:54:32.577117 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pt49\" (UniqueName: \"kubernetes.io/projected/210d3ffb-3280-4c87-8159-32ab42140bc1-kube-api-access-5pt49\") pod \"calico-kube-controllers-5784655f99-jwwjd\" (UID: \"210d3ffb-3280-4c87-8159-32ab42140bc1\") " pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" Dec 13 01:54:32.755905 kubelet[2228]: E1213 01:54:32.755870 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:32.756406 env[1318]: 
time="2024-12-13T01:54:32.756354759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qc46k,Uid:03656298-6b0b-422b-a3a9-1c9ae4e861d5,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:32.758790 kubelet[2228]: E1213 01:54:32.758765 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:32.760263 env[1318]: time="2024-12-13T01:54:32.759554660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q9qs2,Uid:c24c6b5a-d5bf-438a-ad13-509ca76dd573,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:32.765223 env[1318]: time="2024-12-13T01:54:32.765194881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f458bd975-jfg7r,Uid:2e3c9f76-8cdf-4757-acdc-92eda3454b96,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:54:32.770726 env[1318]: time="2024-12-13T01:54:32.770684981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5784655f99-jwwjd,Uid:210d3ffb-3280-4c87-8159-32ab42140bc1,Namespace:calico-system,Attempt:0,}" Dec 13 01:54:32.770881 env[1318]: time="2024-12-13T01:54:32.770846114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f458bd975-shd8j,Uid:57931067-2814-4ccb-9fc1-1f61db24c542,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:54:32.858178 env[1318]: time="2024-12-13T01:54:32.858003658Z" level=error msg="Failed to destroy network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.858990 env[1318]: time="2024-12-13T01:54:32.858882121Z" level=error msg="encountered an error cleaning up failed sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.858990 env[1318]: time="2024-12-13T01:54:32.858929690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qc46k,Uid:03656298-6b0b-422b-a3a9-1c9ae4e861d5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.859571 kubelet[2228]: E1213 01:54:32.859239 2228 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.859571 kubelet[2228]: E1213 01:54:32.859330 2228 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qc46k" Dec 13 01:54:32.859571 kubelet[2228]: E1213 01:54:32.859354 2228 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qc46k" Dec 13 01:54:32.859694 kubelet[2228]: E1213 01:54:32.859405 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qc46k_kube-system(03656298-6b0b-422b-a3a9-1c9ae4e861d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qc46k_kube-system(03656298-6b0b-422b-a3a9-1c9ae4e861d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qc46k" podUID="03656298-6b0b-422b-a3a9-1c9ae4e861d5" Dec 13 01:54:32.862580 env[1318]: time="2024-12-13T01:54:32.862474460Z" level=error msg="Failed to destroy network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.862973 env[1318]: time="2024-12-13T01:54:32.862928002Z" level=error msg="encountered an error cleaning up failed sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.863030 env[1318]: time="2024-12-13T01:54:32.862986663Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-q9qs2,Uid:c24c6b5a-d5bf-438a-ad13-509ca76dd573,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.863275 kubelet[2228]: E1213 01:54:32.863220 2228 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.863347 kubelet[2228]: E1213 01:54:32.863319 2228 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-q9qs2" Dec 13 01:54:32.863347 kubelet[2228]: E1213 01:54:32.863344 2228 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-q9qs2" Dec 13 01:54:32.863421 kubelet[2228]: E1213 01:54:32.863399 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-76f75df574-q9qs2_kube-system(c24c6b5a-d5bf-438a-ad13-509ca76dd573)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-q9qs2_kube-system(c24c6b5a-d5bf-438a-ad13-509ca76dd573)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q9qs2" podUID="c24c6b5a-d5bf-438a-ad13-509ca76dd573" Dec 13 01:54:32.886286 env[1318]: time="2024-12-13T01:54:32.886205950Z" level=error msg="Failed to destroy network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.886771 env[1318]: time="2024-12-13T01:54:32.886746096Z" level=error msg="encountered an error cleaning up failed sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.886883 env[1318]: time="2024-12-13T01:54:32.886854079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5784655f99-jwwjd,Uid:210d3ffb-3280-4c87-8159-32ab42140bc1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Dec 13 01:54:32.887531 kubelet[2228]: E1213 01:54:32.887178 2228 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.887531 kubelet[2228]: E1213 01:54:32.887229 2228 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" Dec 13 01:54:32.887531 kubelet[2228]: E1213 01:54:32.887249 2228 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" Dec 13 01:54:32.887690 kubelet[2228]: E1213 01:54:32.887327 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5784655f99-jwwjd_calico-system(210d3ffb-3280-4c87-8159-32ab42140bc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5784655f99-jwwjd_calico-system(210d3ffb-3280-4c87-8159-32ab42140bc1)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" podUID="210d3ffb-3280-4c87-8159-32ab42140bc1" Dec 13 01:54:32.892744 env[1318]: time="2024-12-13T01:54:32.892671354Z" level=error msg="Failed to destroy network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.893104 env[1318]: time="2024-12-13T01:54:32.893070865Z" level=error msg="encountered an error cleaning up failed sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.893227 env[1318]: time="2024-12-13T01:54:32.893181875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f458bd975-jfg7r,Uid:2e3c9f76-8cdf-4757-acdc-92eda3454b96,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.893486 kubelet[2228]: E1213 01:54:32.893428 2228 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.893542 kubelet[2228]: E1213 01:54:32.893494 2228 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" Dec 13 01:54:32.893542 kubelet[2228]: E1213 01:54:32.893526 2228 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" Dec 13 01:54:32.893627 kubelet[2228]: E1213 01:54:32.893587 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f458bd975-jfg7r_calico-apiserver(2e3c9f76-8cdf-4757-acdc-92eda3454b96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f458bd975-jfg7r_calico-apiserver(2e3c9f76-8cdf-4757-acdc-92eda3454b96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" podUID="2e3c9f76-8cdf-4757-acdc-92eda3454b96" Dec 13 01:54:32.904292 env[1318]: time="2024-12-13T01:54:32.904209783Z" level=error msg="Failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.904595 env[1318]: time="2024-12-13T01:54:32.904555203Z" level=error msg="encountered an error cleaning up failed sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.904663 env[1318]: time="2024-12-13T01:54:32.904620446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f458bd975-shd8j,Uid:57931067-2814-4ccb-9fc1-1f61db24c542,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.904923 kubelet[2228]: E1213 01:54:32.904892 2228 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:32.904992 kubelet[2228]: E1213 01:54:32.904946 2228 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" Dec 13 01:54:32.904992 kubelet[2228]: E1213 01:54:32.904970 2228 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" Dec 13 01:54:32.905050 kubelet[2228]: E1213 01:54:32.905029 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f458bd975-shd8j_calico-apiserver(57931067-2814-4ccb-9fc1-1f61db24c542)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f458bd975-shd8j_calico-apiserver(57931067-2814-4ccb-9fc1-1f61db24c542)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" podUID="57931067-2814-4ccb-9fc1-1f61db24c542" Dec 13 01:54:33.459428 kubelet[2228]: I1213 01:54:33.459395 2228 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" 
Dec 13 01:54:33.461108 env[1318]: time="2024-12-13T01:54:33.460019165Z" level=info msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\"" Dec 13 01:54:33.461348 kubelet[2228]: I1213 01:54:33.460063 2228 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Dec 13 01:54:33.461645 env[1318]: time="2024-12-13T01:54:33.461616460Z" level=info msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\"" Dec 13 01:54:33.462709 kubelet[2228]: I1213 01:54:33.462668 2228 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Dec 13 01:54:33.463161 env[1318]: time="2024-12-13T01:54:33.463138503Z" level=info msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\"" Dec 13 01:54:33.466432 kubelet[2228]: I1213 01:54:33.465885 2228 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Dec 13 01:54:33.466787 env[1318]: time="2024-12-13T01:54:33.466756359Z" level=info msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\"" Dec 13 01:54:33.467684 kubelet[2228]: I1213 01:54:33.467661 2228 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:54:33.468308 env[1318]: time="2024-12-13T01:54:33.468263383Z" level=info msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\"" Dec 13 01:54:33.507438 env[1318]: time="2024-12-13T01:54:33.507385295Z" level=error msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\" failed" error="failed to destroy network for sandbox 
\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:33.507871 kubelet[2228]: E1213 01:54:33.507838 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Dec 13 01:54:33.507956 kubelet[2228]: E1213 01:54:33.507935 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747"} Dec 13 01:54:33.508004 kubelet[2228]: E1213 01:54:33.507977 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"210d3ffb-3280-4c87-8159-32ab42140bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:33.508077 kubelet[2228]: E1213 01:54:33.508005 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"210d3ffb-3280-4c87-8159-32ab42140bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" podUID="210d3ffb-3280-4c87-8159-32ab42140bc1" Dec 13 01:54:33.511196 env[1318]: time="2024-12-13T01:54:33.511167641Z" level=error msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\" failed" error="failed to destroy network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:33.511438 kubelet[2228]: E1213 01:54:33.511417 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Dec 13 01:54:33.511493 kubelet[2228]: E1213 01:54:33.511444 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3"} Dec 13 01:54:33.511493 kubelet[2228]: E1213 01:54:33.511470 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:33.511493 kubelet[2228]: E1213 01:54:33.511492 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" podUID="2e3c9f76-8cdf-4757-acdc-92eda3454b96" Dec 13 01:54:33.518023 env[1318]: time="2024-12-13T01:54:33.517984645Z" level=error msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\" failed" error="failed to destroy network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:33.518237 kubelet[2228]: E1213 01:54:33.518204 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Dec 13 01:54:33.518311 kubelet[2228]: E1213 01:54:33.518259 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b"} Dec 13 01:54:33.518343 kubelet[2228]: 
E1213 01:54:33.518308 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:33.518391 kubelet[2228]: E1213 01:54:33.518342 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q9qs2" podUID="c24c6b5a-d5bf-438a-ad13-509ca76dd573" Dec 13 01:54:33.522262 env[1318]: time="2024-12-13T01:54:33.522187670Z" level=error msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\" failed" error="failed to destroy network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:33.522434 kubelet[2228]: E1213 01:54:33.522412 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" Dec 13 01:54:33.522434 kubelet[2228]: E1213 01:54:33.522436 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5"} Dec 13 01:54:33.522513 kubelet[2228]: E1213 01:54:33.522460 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:33.522513 kubelet[2228]: E1213 01:54:33.522482 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qc46k" podUID="03656298-6b0b-422b-a3a9-1c9ae4e861d5" Dec 13 01:54:33.524776 env[1318]: time="2024-12-13T01:54:33.524725384Z" level=error msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\" failed" error="failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:33.524965 kubelet[2228]: E1213 01:54:33.524944 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:54:33.524965 kubelet[2228]: E1213 01:54:33.524966 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11"} Dec 13 01:54:33.525067 kubelet[2228]: E1213 01:54:33.524994 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:33.525067 kubelet[2228]: E1213 01:54:33.525015 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" podUID="57931067-2814-4ccb-9fc1-1f61db24c542" Dec 13 01:54:34.398496 env[1318]: time="2024-12-13T01:54:34.398453296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t2vq9,Uid:7369f4a7-4a25-4cba-bc4e-08b9ad330777,Namespace:calico-system,Attempt:0,}" Dec 13 01:54:34.827995 env[1318]: time="2024-12-13T01:54:34.823357713Z" level=error msg="Failed to destroy network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:34.827995 env[1318]: time="2024-12-13T01:54:34.823741515Z" level=error msg="encountered an error cleaning up failed sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:34.827995 env[1318]: time="2024-12-13T01:54:34.823794083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t2vq9,Uid:7369f4a7-4a25-4cba-bc4e-08b9ad330777,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:34.827052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef-shm.mount: Deactivated successfully. 
Dec 13 01:54:34.828583 kubelet[2228]: E1213 01:54:34.824066 2228 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:34.828583 kubelet[2228]: E1213 01:54:34.824132 2228 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t2vq9" Dec 13 01:54:34.828583 kubelet[2228]: E1213 01:54:34.824157 2228 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t2vq9" Dec 13 01:54:34.828847 kubelet[2228]: E1213 01:54:34.824224 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t2vq9_calico-system(7369f4a7-4a25-4cba-bc4e-08b9ad330777)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t2vq9_calico-system(7369f4a7-4a25-4cba-bc4e-08b9ad330777)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:35.471467 kubelet[2228]: I1213 01:54:35.471430 2228 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Dec 13 01:54:35.472054 env[1318]: time="2024-12-13T01:54:35.472010240Z" level=info msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\"" Dec 13 01:54:35.493146 env[1318]: time="2024-12-13T01:54:35.493087716Z" level=error msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\" failed" error="failed to destroy network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:35.493384 kubelet[2228]: E1213 01:54:35.493357 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Dec 13 01:54:35.493456 kubelet[2228]: E1213 01:54:35.493409 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef"} Dec 13 01:54:35.493456 kubelet[2228]: E1213 01:54:35.493455 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:35.493578 kubelet[2228]: E1213 01:54:35.493490 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:37.223555 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:45024.service. Dec 13 01:54:37.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.88:22-10.0.0.1:45024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:37.224561 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:54:37.224593 kernel: audit: type=1130 audit(1734054877.223:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.88:22-10.0.0.1:45024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:37.258000 audit[3328]: USER_ACCT pid=3328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.259013 sshd[3328]: Accepted publickey for core from 10.0.0.1 port 45024 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:54:37.262000 audit[3328]: CRED_ACQ pid=3328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.263291 sshd[3328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:54:37.266669 kernel: audit: type=1101 audit(1734054877.258:282): pid=3328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.266782 kernel: audit: type=1103 audit(1734054877.262:283): pid=3328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.266802 kernel: audit: type=1006 audit(1734054877.262:284): pid=3328 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 01:54:37.269172 kernel: audit: type=1300 audit(1734054877.262:284): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdaec0ba90 a2=3 a3=0 items=0 ppid=1 pid=3328 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 13 01:54:37.262000 audit[3328]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdaec0ba90 a2=3 a3=0 items=0 ppid=1 pid=3328 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:37.273698 kernel: audit: type=1327 audit(1734054877.262:284): proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:37.262000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:37.276974 systemd-logind[1304]: New session 9 of user core. Dec 13 01:54:37.277661 systemd[1]: Started session-9.scope. Dec 13 01:54:37.281000 audit[3328]: USER_START pid=3328 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.282000 audit[3331]: CRED_ACQ pid=3331 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.289574 kernel: audit: type=1105 audit(1734054877.281:285): pid=3328 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.289605 kernel: audit: type=1103 audit(1734054877.282:286): pid=3331 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.392377 sshd[3328]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:37.392000 
audit[3328]: USER_END pid=3328 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.394741 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:45024.service: Deactivated successfully. Dec 13 01:54:37.395761 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:54:37.396095 systemd-logind[1304]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:54:37.393000 audit[3328]: CRED_DISP pid=3328 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.399156 systemd-logind[1304]: Removed session 9. Dec 13 01:54:37.401258 kernel: audit: type=1106 audit(1734054877.392:287): pid=3328 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.402473 kernel: audit: type=1104 audit(1734054877.393:288): pid=3328 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:37.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.88:22-10.0.0.1:45024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:38.382368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073250477.mount: Deactivated successfully. 
Dec 13 01:54:38.741932 env[1318]: time="2024-12-13T01:54:38.741802554Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:38.744240 env[1318]: time="2024-12-13T01:54:38.744185083Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:38.745808 env[1318]: time="2024-12-13T01:54:38.745770853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:38.747350 env[1318]: time="2024-12-13T01:54:38.747317801Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:38.747765 env[1318]: time="2024-12-13T01:54:38.747717051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:54:38.756307 env[1318]: time="2024-12-13T01:54:38.756234191Z" level=info msg="CreateContainer within sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:54:38.774936 env[1318]: time="2024-12-13T01:54:38.774893353Z" level=info msg="CreateContainer within sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4c9dee64263d00d4ef6a5beb83fda304b7a35e847c774a241ea3f40d72e062b3\"" Dec 13 01:54:38.775849 env[1318]: time="2024-12-13T01:54:38.775796920Z" level=info msg="StartContainer for 
\"4c9dee64263d00d4ef6a5beb83fda304b7a35e847c774a241ea3f40d72e062b3\"" Dec 13 01:54:38.824844 env[1318]: time="2024-12-13T01:54:38.824785841Z" level=info msg="StartContainer for \"4c9dee64263d00d4ef6a5beb83fda304b7a35e847c774a241ea3f40d72e062b3\" returns successfully" Dec 13 01:54:38.893062 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:54:38.893219 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:54:39.140824 env[1318]: time="2024-12-13T01:54:39.140767192Z" level=info msg="shim disconnected" id=4c9dee64263d00d4ef6a5beb83fda304b7a35e847c774a241ea3f40d72e062b3 Dec 13 01:54:39.140824 env[1318]: time="2024-12-13T01:54:39.140822195Z" level=warning msg="cleaning up after shim disconnected" id=4c9dee64263d00d4ef6a5beb83fda304b7a35e847c774a241ea3f40d72e062b3 namespace=k8s.io Dec 13 01:54:39.140824 env[1318]: time="2024-12-13T01:54:39.140830801Z" level=info msg="cleaning up dead shim" Dec 13 01:54:39.147089 env[1318]: time="2024-12-13T01:54:39.147040271Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3406 runtime=io.containerd.runc.v2\n" Dec 13 01:54:39.480179 kubelet[2228]: I1213 01:54:39.480076 2228 scope.go:117] "RemoveContainer" containerID="4c9dee64263d00d4ef6a5beb83fda304b7a35e847c774a241ea3f40d72e062b3" Dec 13 01:54:39.480179 kubelet[2228]: E1213 01:54:39.480158 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:39.484228 env[1318]: time="2024-12-13T01:54:39.484178451Z" level=info msg="CreateContainer within sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Dec 13 01:54:39.503926 env[1318]: time="2024-12-13T01:54:39.503875939Z" level=info msg="CreateContainer within sandbox 
\"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939\"" Dec 13 01:54:39.504731 env[1318]: time="2024-12-13T01:54:39.504709446Z" level=info msg="StartContainer for \"b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939\"" Dec 13 01:54:39.547596 env[1318]: time="2024-12-13T01:54:39.547557113Z" level=info msg="StartContainer for \"b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939\" returns successfully" Dec 13 01:54:39.609973 env[1318]: time="2024-12-13T01:54:39.609926458Z" level=info msg="shim disconnected" id=b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939 Dec 13 01:54:39.609973 env[1318]: time="2024-12-13T01:54:39.609972264Z" level=warning msg="cleaning up after shim disconnected" id=b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939 namespace=k8s.io Dec 13 01:54:39.609973 env[1318]: time="2024-12-13T01:54:39.609981321Z" level=info msg="cleaning up dead shim" Dec 13 01:54:39.615746 env[1318]: time="2024-12-13T01:54:39.615698105Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3470 runtime=io.containerd.runc.v2\n" Dec 13 01:54:40.382509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939-rootfs.mount: Deactivated successfully. 
Dec 13 01:54:40.483654 kubelet[2228]: I1213 01:54:40.483567 2228 scope.go:117] "RemoveContainer" containerID="4c9dee64263d00d4ef6a5beb83fda304b7a35e847c774a241ea3f40d72e062b3" Dec 13 01:54:40.484098 kubelet[2228]: I1213 01:54:40.483840 2228 scope.go:117] "RemoveContainer" containerID="b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939" Dec 13 01:54:40.484098 kubelet[2228]: E1213 01:54:40.483900 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:40.484505 kubelet[2228]: E1213 01:54:40.484474 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-7wdql_calico-system(1700d6cc-17fc-42bf-b164-298c2c341d88)\"" pod="calico-system/calico-node-7wdql" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" Dec 13 01:54:40.484878 env[1318]: time="2024-12-13T01:54:40.484844817Z" level=info msg="RemoveContainer for \"4c9dee64263d00d4ef6a5beb83fda304b7a35e847c774a241ea3f40d72e062b3\"" Dec 13 01:54:40.636025 env[1318]: time="2024-12-13T01:54:40.635889753Z" level=info msg="RemoveContainer for \"4c9dee64263d00d4ef6a5beb83fda304b7a35e847c774a241ea3f40d72e062b3\" returns successfully" Dec 13 01:54:42.396154 systemd[1]: Started sshd@9-10.0.0.88:22-10.0.0.1:45032.service. Dec 13 01:54:42.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.88:22-10.0.0.1:45032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:42.397590 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:54:42.397684 kernel: audit: type=1130 audit(1734054882.396:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.88:22-10.0.0.1:45032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:42.432000 audit[3482]: USER_ACCT pid=3482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.432523 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 45032 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:54:42.434528 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:54:42.433000 audit[3482]: CRED_ACQ pid=3482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.438332 systemd-logind[1304]: New session 10 of user core. Dec 13 01:54:42.439488 systemd[1]: Started session-10.scope. 
Dec 13 01:54:42.440414 kernel: audit: type=1101 audit(1734054882.432:291): pid=3482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.440469 kernel: audit: type=1103 audit(1734054882.433:292): pid=3482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.440497 kernel: audit: type=1006 audit(1734054882.433:293): pid=3482 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 01:54:42.433000 audit[3482]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff780b9540 a2=3 a3=0 items=0 ppid=1 pid=3482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:42.447129 kernel: audit: type=1300 audit(1734054882.433:293): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff780b9540 a2=3 a3=0 items=0 ppid=1 pid=3482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:42.447213 kernel: audit: type=1327 audit(1734054882.433:293): proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:42.433000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:42.448500 kernel: audit: type=1105 audit(1734054882.444:294): pid=3482 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.444000 audit[3482]: USER_START pid=3482 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.446000 audit[3485]: CRED_ACQ pid=3485 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.457599 kernel: audit: type=1103 audit(1734054882.446:295): pid=3485 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.557467 sshd[3482]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:42.558000 audit[3482]: USER_END pid=3482 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.559840 systemd[1]: sshd@9-10.0.0.88:22-10.0.0.1:45032.service: Deactivated successfully. Dec 13 01:54:42.560840 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:54:42.561297 systemd-logind[1304]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:54:42.562051 systemd-logind[1304]: Removed session 10. 
Dec 13 01:54:42.558000 audit[3482]: CRED_DISP pid=3482 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.566178 kernel: audit: type=1106 audit(1734054882.558:296): pid=3482 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.566261 kernel: audit: type=1104 audit(1734054882.558:297): pid=3482 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:42.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.88:22-10.0.0.1:45032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:44.396478 env[1318]: time="2024-12-13T01:54:44.396424676Z" level=info msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\"" Dec 13 01:54:44.419467 env[1318]: time="2024-12-13T01:54:44.419404610Z" level=error msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\" failed" error="failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:44.419691 kubelet[2228]: E1213 01:54:44.419662 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:54:44.419936 kubelet[2228]: E1213 01:54:44.419715 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11"} Dec 13 01:54:44.419936 kubelet[2228]: E1213 01:54:44.419750 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
Dec 13 01:54:44.419936 kubelet[2228]: E1213 01:54:44.419779 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" podUID="57931067-2814-4ccb-9fc1-1f61db24c542" Dec 13 01:54:45.178747 kubelet[2228]: I1213 01:54:45.178699 2228 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:54:45.179370 kubelet[2228]: E1213 01:54:45.179356 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:45.257000 audit[3521]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=3521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:45.257000 audit[3521]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffca26beaa0 a2=0 a3=7ffca26bea8c items=0 ppid=2407 pid=3521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:45.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:45.265000 audit[3521]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:54:45.265000 audit[3521]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 
a1=7ffca26beaa0 a2=0 a3=7ffca26bea8c items=0 ppid=2407 pid=3521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:45.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:54:45.397164 env[1318]: time="2024-12-13T01:54:45.397117617Z" level=info msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\"" Dec 13 01:54:45.423160 env[1318]: time="2024-12-13T01:54:45.423090309Z" level=error msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\" failed" error="failed to destroy network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:45.423394 kubelet[2228]: E1213 01:54:45.423350 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" Dec 13 01:54:45.423394 kubelet[2228]: E1213 01:54:45.423392 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5"} Dec 13 01:54:45.423693 kubelet[2228]: E1213 01:54:45.423426 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:45.423693 kubelet[2228]: E1213 01:54:45.423453 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qc46k" podUID="03656298-6b0b-422b-a3a9-1c9ae4e861d5" Dec 13 01:54:45.493883 kubelet[2228]: E1213 01:54:45.493775 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:46.396560 env[1318]: time="2024-12-13T01:54:46.396515956Z" level=info msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\"" Dec 13 01:54:46.396785 env[1318]: time="2024-12-13T01:54:46.396749474Z" level=info msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\"" Dec 13 01:54:46.421034 env[1318]: time="2024-12-13T01:54:46.420969671Z" level=error msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\" failed" error="failed to destroy network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:46.421394 kubelet[2228]: E1213 01:54:46.421224 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Dec 13 01:54:46.421394 kubelet[2228]: E1213 01:54:46.421293 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747"} Dec 13 01:54:46.421394 kubelet[2228]: E1213 01:54:46.421338 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"210d3ffb-3280-4c87-8159-32ab42140bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:46.421394 kubelet[2228]: E1213 01:54:46.421387 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"210d3ffb-3280-4c87-8159-32ab42140bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" podUID="210d3ffb-3280-4c87-8159-32ab42140bc1" Dec 13 01:54:46.425943 env[1318]: time="2024-12-13T01:54:46.425875073Z" level=error msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\" failed" error="failed to destroy network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:46.426222 kubelet[2228]: E1213 01:54:46.426162 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Dec 13 01:54:46.426222 kubelet[2228]: E1213 01:54:46.426224 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef"} Dec 13 01:54:46.426722 kubelet[2228]: E1213 01:54:46.426288 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:46.426722 kubelet[2228]: E1213 
01:54:46.426327 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:54:47.397143 env[1318]: time="2024-12-13T01:54:47.397098556Z" level=info msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\"" Dec 13 01:54:47.416901 env[1318]: time="2024-12-13T01:54:47.416841653Z" level=error msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\" failed" error="failed to destroy network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:47.417117 kubelet[2228]: E1213 01:54:47.417089 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Dec 13 01:54:47.417183 kubelet[2228]: E1213 01:54:47.417145 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b"} Dec 13 01:54:47.417221 kubelet[2228]: E1213 01:54:47.417188 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:47.417324 kubelet[2228]: E1213 01:54:47.417225 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q9qs2" podUID="c24c6b5a-d5bf-438a-ad13-509ca76dd573" Dec 13 01:54:47.561021 systemd[1]: Started sshd@10-10.0.0.88:22-10.0.0.1:58292.service. Dec 13 01:54:47.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.88:22-10.0.0.1:58292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:47.562085 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 01:54:47.562135 kernel: audit: type=1130 audit(1734054887.560:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.88:22-10.0.0.1:58292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 01:54:47.596000 audit[3620]: USER_ACCT pid=3620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.596726 sshd[3620]: Accepted publickey for core from 10.0.0.1 port 58292 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:54:47.598462 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:54:47.597000 audit[3620]: CRED_ACQ pid=3620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.602006 systemd-logind[1304]: New session 11 of user core. Dec 13 01:54:47.602775 systemd[1]: Started session-11.scope. Dec 13 01:54:47.606081 kernel: audit: type=1101 audit(1734054887.596:302): pid=3620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.606129 kernel: audit: type=1103 audit(1734054887.597:303): pid=3620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.606145 kernel: audit: type=1006 audit(1734054887.597:304): pid=3620 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 13 01:54:47.597000 audit[3620]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0b6feae0 a2=3 a3=0 items=0 ppid=1 pid=3620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:47.612974 kernel: audit: type=1300 audit(1734054887.597:304): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0b6feae0 a2=3 a3=0 items=0 ppid=1 pid=3620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:47.613047 kernel: audit: type=1327 audit(1734054887.597:304): proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:47.597000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:47.607000 audit[3620]: USER_START pid=3620 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.619003 kernel: audit: type=1105 audit(1734054887.607:305): pid=3620 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.619054 kernel: audit: type=1103 audit(1734054887.608:306): pid=3623 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.608000 audit[3623]: CRED_ACQ pid=3623 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.711819 sshd[3620]: pam_unix(sshd:session): session closed for user core Dec 13 
01:54:47.713000 audit[3620]: USER_END pid=3620 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.714654 systemd[1]: Started sshd@11-10.0.0.88:22-10.0.0.1:58294.service. Dec 13 01:54:47.715516 systemd[1]: sshd@10-10.0.0.88:22-10.0.0.1:58292.service: Deactivated successfully. Dec 13 01:54:47.716474 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:54:47.713000 audit[3620]: CRED_DISP pid=3620 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.718524 systemd-logind[1304]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:54:47.719868 systemd-logind[1304]: Removed session 11. Dec 13 01:54:47.721401 kernel: audit: type=1106 audit(1734054887.713:307): pid=3620 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.721450 kernel: audit: type=1104 audit(1734054887.713:308): pid=3620 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.88:22-10.0.0.1:58294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:47.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.88:22-10.0.0.1:58292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:47.750000 audit[3633]: USER_ACCT pid=3633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.750622 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 58294 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:54:47.751000 audit[3633]: CRED_ACQ pid=3633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.751000 audit[3633]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9b994d40 a2=3 a3=0 items=0 ppid=1 pid=3633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:47.751000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:47.751772 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:54:47.755026 systemd-logind[1304]: New session 12 of user core. Dec 13 01:54:47.755781 systemd[1]: Started session-12.scope. 
Dec 13 01:54:47.759000 audit[3633]: USER_START pid=3633 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.760000 audit[3638]: CRED_ACQ pid=3638 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.982529 sshd[3633]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:47.984620 systemd[1]: Started sshd@12-10.0.0.88:22-10.0.0.1:58298.service. Dec 13 01:54:47.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.88:22-10.0.0.1:58298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:47.985000 audit[3633]: USER_END pid=3633 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.985000 audit[3633]: CRED_DISP pid=3633 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:47.987411 systemd[1]: sshd@11-10.0.0.88:22-10.0.0.1:58294.service: Deactivated successfully. Dec 13 01:54:47.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.88:22-10.0.0.1:58294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:47.989804 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:54:47.989975 systemd-logind[1304]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:54:47.990883 systemd-logind[1304]: Removed session 12. Dec 13 01:54:48.020000 audit[3646]: USER_ACCT pid=3646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:48.020640 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 58298 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:54:48.021000 audit[3646]: CRED_ACQ pid=3646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:48.021000 audit[3646]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffbf709b70 a2=3 a3=0 items=0 ppid=1 pid=3646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:48.021000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:48.021592 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:54:48.024959 systemd-logind[1304]: New session 13 of user core. Dec 13 01:54:48.025775 systemd[1]: Started session-13.scope. 
Dec 13 01:54:48.030000 audit[3646]: USER_START pid=3646 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:48.031000 audit[3651]: CRED_ACQ pid=3651 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:48.249509 sshd[3646]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:48.250000 audit[3646]: USER_END pid=3646 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:48.250000 audit[3646]: CRED_DISP pid=3646 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:48.251993 systemd[1]: sshd@12-10.0.0.88:22-10.0.0.1:58298.service: Deactivated successfully. Dec 13 01:54:48.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.88:22-10.0.0.1:58298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:48.252898 systemd-logind[1304]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:54:48.252922 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:54:48.253633 systemd-logind[1304]: Removed session 13. 
Dec 13 01:54:48.397045 env[1318]: time="2024-12-13T01:54:48.396975871Z" level=info msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\"" Dec 13 01:54:48.419217 env[1318]: time="2024-12-13T01:54:48.419151852Z" level=error msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\" failed" error="failed to destroy network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:48.419502 kubelet[2228]: E1213 01:54:48.419464 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Dec 13 01:54:48.419841 kubelet[2228]: E1213 01:54:48.419518 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3"} Dec 13 01:54:48.419841 kubelet[2228]: E1213 01:54:48.419563 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 
01:54:48.419841 kubelet[2228]: E1213 01:54:48.419602 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" podUID="2e3c9f76-8cdf-4757-acdc-92eda3454b96" Dec 13 01:54:53.252164 systemd[1]: Started sshd@13-10.0.0.88:22-10.0.0.1:58314.service. Dec 13 01:54:53.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.88:22-10.0.0.1:58314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:53.253400 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 01:54:53.253464 kernel: audit: type=1130 audit(1734054893.252:328): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.88:22-10.0.0.1:58314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:53.284000 audit[3685]: USER_ACCT pid=3685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.285356 sshd[3685]: Accepted publickey for core from 10.0.0.1 port 58314 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:54:53.289000 audit[3685]: CRED_ACQ pid=3685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.290655 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:54:53.294831 kernel: audit: type=1101 audit(1734054893.284:329): pid=3685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.294874 kernel: audit: type=1103 audit(1734054893.289:330): pid=3685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.294901 kernel: audit: type=1006 audit(1734054893.290:331): pid=3685 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 01:54:53.294570 systemd-logind[1304]: New session 14 of user core. Dec 13 01:54:53.295493 systemd[1]: Started session-14.scope. 
Dec 13 01:54:53.290000 audit[3685]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4816ff50 a2=3 a3=0 items=0 ppid=1 pid=3685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:53.301856 kernel: audit: type=1300 audit(1734054893.290:331): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4816ff50 a2=3 a3=0 items=0 ppid=1 pid=3685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:53.301908 kernel: audit: type=1327 audit(1734054893.290:331): proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:53.290000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:53.299000 audit[3685]: USER_START pid=3685 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.308144 kernel: audit: type=1105 audit(1734054893.299:332): pid=3685 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.308179 kernel: audit: type=1103 audit(1734054893.301:333): pid=3688 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.301000 audit[3688]: CRED_ACQ pid=3688 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.401928 sshd[3685]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:53.402000 audit[3685]: USER_END pid=3685 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.404353 systemd[1]: sshd@13-10.0.0.88:22-10.0.0.1:58314.service: Deactivated successfully. Dec 13 01:54:53.405525 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:54:53.405551 systemd-logind[1304]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:54:53.406754 systemd-logind[1304]: Removed session 14. Dec 13 01:54:53.402000 audit[3685]: CRED_DISP pid=3685 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.411211 kernel: audit: type=1106 audit(1734054893.402:334): pid=3685 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.411278 kernel: audit: type=1104 audit(1734054893.402:335): pid=3685 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:53.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.88:22-10.0.0.1:58314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 01:54:55.396860 kubelet[2228]: I1213 01:54:55.396819 2228 scope.go:117] "RemoveContainer" containerID="b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939" Dec 13 01:54:55.397443 env[1318]: time="2024-12-13T01:54:55.397398561Z" level=info msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\"" Dec 13 01:54:55.397848 kubelet[2228]: E1213 01:54:55.397827 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:55.402165 env[1318]: time="2024-12-13T01:54:55.401959512Z" level=info msg="CreateContainer within sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Dec 13 01:54:55.418827 env[1318]: time="2024-12-13T01:54:55.418775816Z" level=info msg="CreateContainer within sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec\"" Dec 13 01:54:55.419618 env[1318]: time="2024-12-13T01:54:55.419592740Z" level=info msg="StartContainer for \"bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec\"" Dec 13 01:54:55.432571 env[1318]: time="2024-12-13T01:54:55.432498193Z" level=error msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\" failed" error="failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:55.432846 kubelet[2228]: E1213 01:54:55.432810 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:54:55.432936 kubelet[2228]: E1213 01:54:55.432875 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11"} Dec 13 01:54:55.432936 kubelet[2228]: E1213 01:54:55.432930 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:55.433023 kubelet[2228]: E1213 01:54:55.432980 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" podUID="57931067-2814-4ccb-9fc1-1f61db24c542" Dec 13 01:54:55.489517 env[1318]: time="2024-12-13T01:54:55.489428863Z" level=info msg="StartContainer for 
\"bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec\" returns successfully" Dec 13 01:54:55.514245 kubelet[2228]: E1213 01:54:55.514212 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:55.527046 kubelet[2228]: I1213 01:54:55.526990 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-7wdql" podStartSLOduration=18.074638708 podStartE2EDuration="35.526940894s" podCreationTimestamp="2024-12-13 01:54:20 +0000 UTC" firstStartedPulling="2024-12-13 01:54:21.295694582 +0000 UTC m=+20.034879263" lastFinishedPulling="2024-12-13 01:54:38.747996778 +0000 UTC m=+37.487181449" observedRunningTime="2024-12-13 01:54:55.526663784 +0000 UTC m=+54.265848485" watchObservedRunningTime="2024-12-13 01:54:55.526940894 +0000 UTC m=+54.266125585" Dec 13 01:54:55.587456 env[1318]: time="2024-12-13T01:54:55.587403383Z" level=info msg="shim disconnected" id=bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec Dec 13 01:54:55.587456 env[1318]: time="2024-12-13T01:54:55.587457025Z" level=warning msg="cleaning up after shim disconnected" id=bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec namespace=k8s.io Dec 13 01:54:55.587662 env[1318]: time="2024-12-13T01:54:55.587467594Z" level=info msg="cleaning up dead shim" Dec 13 01:54:55.594222 env[1318]: time="2024-12-13T01:54:55.594170846Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\n" Dec 13 01:54:56.414683 systemd[1]: run-containerd-runc-k8s.io-bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec-runc.wMMQ1g.mount: Deactivated successfully. 
Dec 13 01:54:56.414825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec-rootfs.mount: Deactivated successfully. Dec 13 01:54:56.517252 kubelet[2228]: I1213 01:54:56.517212 2228 scope.go:117] "RemoveContainer" containerID="b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939" Dec 13 01:54:56.517606 kubelet[2228]: I1213 01:54:56.517456 2228 scope.go:117] "RemoveContainer" containerID="bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec" Dec 13 01:54:56.517606 kubelet[2228]: E1213 01:54:56.517510 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:56.517890 kubelet[2228]: E1213 01:54:56.517875 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-7wdql_calico-system(1700d6cc-17fc-42bf-b164-298c2c341d88)\"" pod="calico-system/calico-node-7wdql" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" Dec 13 01:54:56.518785 env[1318]: time="2024-12-13T01:54:56.518737211Z" level=info msg="RemoveContainer for \"b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939\"" Dec 13 01:54:56.695809 env[1318]: time="2024-12-13T01:54:56.695694970Z" level=info msg="RemoveContainer for \"b14fa0042d56264c1f5cb24fc020d3f22c20e5321d246aaa8af8a91b0d909939\" returns successfully" Dec 13 01:54:57.396892 env[1318]: time="2024-12-13T01:54:57.396844402Z" level=info msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\"" Dec 13 01:54:57.422237 env[1318]: time="2024-12-13T01:54:57.422158202Z" level=error msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\" failed" error="failed to destroy network for sandbox 
\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:57.422425 kubelet[2228]: E1213 01:54:57.422408 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" Dec 13 01:54:57.422469 kubelet[2228]: E1213 01:54:57.422445 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5"} Dec 13 01:54:57.422499 kubelet[2228]: E1213 01:54:57.422474 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:54:57.422565 kubelet[2228]: E1213 01:54:57.422499 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qc46k" podUID="03656298-6b0b-422b-a3a9-1c9ae4e861d5" Dec 13 01:54:57.521702 kubelet[2228]: I1213 01:54:57.521669 2228 scope.go:117] "RemoveContainer" containerID="bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec" Dec 13 01:54:57.522114 kubelet[2228]: E1213 01:54:57.521757 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:54:57.522235 kubelet[2228]: E1213 01:54:57.522219 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-7wdql_calico-system(1700d6cc-17fc-42bf-b164-298c2c341d88)\"" pod="calico-system/calico-node-7wdql" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" Dec 13 01:54:58.404705 systemd[1]: Started sshd@14-10.0.0.88:22-10.0.0.1:40972.service. Dec 13 01:54:58.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.88:22-10.0.0.1:40972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:54:58.406101 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:54:58.406175 kernel: audit: type=1130 audit(1734054898.404:337): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.88:22-10.0.0.1:40972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:58.439000 audit[3832]: USER_ACCT pid=3832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.440108 sshd[3832]: Accepted publickey for core from 10.0.0.1 port 40972 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:54:58.441792 sshd[3832]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:54:58.441000 audit[3832]: CRED_ACQ pid=3832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.445138 systemd-logind[1304]: New session 15 of user core. Dec 13 01:54:58.445794 systemd[1]: Started session-15.scope. Dec 13 01:54:58.448545 kernel: audit: type=1101 audit(1734054898.439:338): pid=3832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.448697 kernel: audit: type=1103 audit(1734054898.441:339): pid=3832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.451255 kernel: audit: type=1006 audit(1734054898.441:340): pid=3832 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 01:54:58.451332 kernel: audit: type=1300 audit(1734054898.441:340): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7017f6f0 a2=3 a3=0 items=0 ppid=1 pid=3832 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:58.441000 audit[3832]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7017f6f0 a2=3 a3=0 items=0 ppid=1 pid=3832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:54:58.455735 kernel: audit: type=1327 audit(1734054898.441:340): proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:58.441000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:54:58.449000 audit[3832]: USER_START pid=3832 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.462153 kernel: audit: type=1105 audit(1734054898.449:341): pid=3832 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.462195 kernel: audit: type=1103 audit(1734054898.450:342): pid=3835 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.450000 audit[3835]: CRED_ACQ pid=3835 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.552903 sshd[3832]: pam_unix(sshd:session): session closed for user core Dec 13 
01:54:58.553000 audit[3832]: USER_END pid=3832 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.555156 systemd[1]: sshd@14-10.0.0.88:22-10.0.0.1:40972.service: Deactivated successfully. Dec 13 01:54:58.556183 systemd-logind[1304]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:54:58.556202 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:54:58.556901 systemd-logind[1304]: Removed session 15. Dec 13 01:54:58.553000 audit[3832]: CRED_DISP pid=3832 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.563196 kernel: audit: type=1106 audit(1734054898.553:343): pid=3832 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.563244 kernel: audit: type=1104 audit(1734054898.553:344): pid=3832 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:54:58.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.88:22-10.0.0.1:40972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:54:59.396957 env[1318]: time="2024-12-13T01:54:59.396883899Z" level=info msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\"" Dec 13 01:54:59.419361 env[1318]: time="2024-12-13T01:54:59.419288858Z" level=error msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\" failed" error="failed to destroy network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:54:59.419524 kubelet[2228]: E1213 01:54:59.419505 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Dec 13 01:54:59.419765 kubelet[2228]: E1213 01:54:59.419545 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b"} Dec 13 01:54:59.419765 kubelet[2228]: E1213 01:54:59.419578 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
Dec 13 01:54:59.419765 kubelet[2228]: E1213 01:54:59.419605 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q9qs2" podUID="c24c6b5a-d5bf-438a-ad13-509ca76dd573" Dec 13 01:55:01.396400 env[1318]: time="2024-12-13T01:55:01.396355849Z" level=info msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\"" Dec 13 01:55:01.396726 env[1318]: time="2024-12-13T01:55:01.396431426Z" level=info msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\"" Dec 13 01:55:01.396726 env[1318]: time="2024-12-13T01:55:01.396366801Z" level=info msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\"" Dec 13 01:55:01.424932 env[1318]: time="2024-12-13T01:55:01.424856435Z" level=error msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\" failed" error="failed to destroy network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.425099 kubelet[2228]: E1213 01:55:01.425079 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Dec 13 01:55:01.425387 kubelet[2228]: E1213 01:55:01.425122 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747"} Dec 13 01:55:01.425387 kubelet[2228]: E1213 01:55:01.425154 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"210d3ffb-3280-4c87-8159-32ab42140bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:01.425387 kubelet[2228]: E1213 01:55:01.425182 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"210d3ffb-3280-4c87-8159-32ab42140bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" podUID="210d3ffb-3280-4c87-8159-32ab42140bc1" Dec 13 01:55:01.428678 env[1318]: time="2024-12-13T01:55:01.428629430Z" level=error msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\" failed" error="failed to destroy network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.428812 kubelet[2228]: E1213 01:55:01.428780 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Dec 13 01:55:01.428920 kubelet[2228]: E1213 01:55:01.428820 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3"} Dec 13 01:55:01.428920 kubelet[2228]: E1213 01:55:01.428854 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:01.428920 kubelet[2228]: E1213 01:55:01.428878 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" podUID="2e3c9f76-8cdf-4757-acdc-92eda3454b96" Dec 13 01:55:01.429189 env[1318]: time="2024-12-13T01:55:01.429137716Z" level=error msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\" failed" error="failed to destroy network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.429382 kubelet[2228]: E1213 01:55:01.429364 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Dec 13 01:55:01.429382 kubelet[2228]: E1213 01:55:01.429384 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef"} Dec 13 01:55:01.429463 kubelet[2228]: E1213 01:55:01.429412 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:01.429463 kubelet[2228]: E1213 
01:55:01.429431 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:55:03.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.88:22-10.0.0.1:40976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:03.556129 systemd[1]: Started sshd@15-10.0.0.88:22-10.0.0.1:40976.service. Dec 13 01:55:03.557713 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:55:03.557795 kernel: audit: type=1130 audit(1734054903.555:346): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.88:22-10.0.0.1:40976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:55:03.590000 audit[3945]: USER_ACCT pid=3945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.591871 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 40976 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:03.593341 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:03.591000 audit[3945]: CRED_ACQ pid=3945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.596809 systemd-logind[1304]: New session 16 of user core. Dec 13 01:55:03.597819 systemd[1]: Started session-16.scope. Dec 13 01:55:03.601237 kernel: audit: type=1101 audit(1734054903.590:347): pid=3945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.601313 kernel: audit: type=1103 audit(1734054903.591:348): pid=3945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.591000 audit[3945]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdff787940 a2=3 a3=0 items=0 ppid=1 pid=3945 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:03.608624 kernel: audit: type=1006 audit(1734054903.591:349): pid=3945 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 01:55:03.608673 kernel: audit: type=1300 audit(1734054903.591:349): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdff787940 a2=3 a3=0 items=0 ppid=1 pid=3945 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:03.608702 kernel: audit: type=1327 audit(1734054903.591:349): proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:03.591000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:03.601000 audit[3945]: USER_START pid=3945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.614191 kernel: audit: type=1105 audit(1734054903.601:350): pid=3945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.614238 kernel: audit: type=1103 audit(1734054903.602:351): pid=3948 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.602000 audit[3948]: CRED_ACQ pid=3948 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.703410 sshd[3945]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:03.703000 
audit[3945]: USER_END pid=3945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.705414 systemd[1]: sshd@15-10.0.0.88:22-10.0.0.1:40976.service: Deactivated successfully. Dec 13 01:55:03.706318 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:55:03.703000 audit[3945]: CRED_DISP pid=3945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.709796 systemd-logind[1304]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:55:03.710465 systemd-logind[1304]: Removed session 16. Dec 13 01:55:03.712528 kernel: audit: type=1106 audit(1734054903.703:352): pid=3945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.712587 kernel: audit: type=1104 audit(1734054903.703:353): pid=3945 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:03.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.88:22-10.0.0.1:40976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:08.706779 systemd[1]: Started sshd@16-10.0.0.88:22-10.0.0.1:48998.service. 
Dec 13 01:55:08.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.88:22-10.0.0.1:48998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:08.707733 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:55:08.707781 kernel: audit: type=1130 audit(1734054908.705:355): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.88:22-10.0.0.1:48998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:08.738000 audit[3960]: USER_ACCT pid=3960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.739783 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 48998 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:08.742673 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:08.741000 audit[3960]: CRED_ACQ pid=3960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.746465 systemd-logind[1304]: New session 17 of user core. 
Dec 13 01:55:08.747051 kernel: audit: type=1101 audit(1734054908.738:356): pid=3960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.747094 kernel: audit: type=1103 audit(1734054908.741:357): pid=3960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.747115 kernel: audit: type=1006 audit(1734054908.741:358): pid=3960 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 01:55:08.747074 systemd[1]: Started session-17.scope. Dec 13 01:55:08.749427 kernel: audit: type=1300 audit(1734054908.741:358): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe28e3dc50 a2=3 a3=0 items=0 ppid=1 pid=3960 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:08.741000 audit[3960]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe28e3dc50 a2=3 a3=0 items=0 ppid=1 pid=3960 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:08.753305 kernel: audit: type=1327 audit(1734054908.741:358): proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:08.741000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:08.750000 audit[3960]: USER_START pid=3960 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.758753 kernel: audit: type=1105 audit(1734054908.750:359): pid=3960 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.758794 kernel: audit: type=1103 audit(1734054908.751:360): pid=3963 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.751000 audit[3963]: CRED_ACQ pid=3963 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.882989 sshd[3960]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:08.882000 audit[3960]: USER_END pid=3960 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.885553 systemd[1]: sshd@16-10.0.0.88:22-10.0.0.1:48998.service: Deactivated successfully. Dec 13 01:55:08.886302 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:55:08.882000 audit[3960]: CRED_DISP pid=3960 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.890896 systemd-logind[1304]: Session 17 logged out. Waiting for processes to exit. 
Dec 13 01:55:08.891651 systemd-logind[1304]: Removed session 17. Dec 13 01:55:08.891887 kernel: audit: type=1106 audit(1734054908.882:361): pid=3960 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.891925 kernel: audit: type=1104 audit(1734054908.882:362): pid=3960 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:08.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.88:22-10.0.0.1:48998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:09.396397 env[1318]: time="2024-12-13T01:55:09.396348238Z" level=info msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\"" Dec 13 01:55:09.422666 env[1318]: time="2024-12-13T01:55:09.422592236Z" level=error msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\" failed" error="failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:09.422922 kubelet[2228]: E1213 01:55:09.422882 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:55:09.423200 kubelet[2228]: E1213 01:55:09.422942 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11"} Dec 13 01:55:09.423200 kubelet[2228]: E1213 01:55:09.422990 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:09.423200 kubelet[2228]: E1213 01:55:09.423030 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" podUID="57931067-2814-4ccb-9fc1-1f61db24c542" Dec 13 01:55:11.396782 kubelet[2228]: I1213 01:55:11.396735 2228 scope.go:117] "RemoveContainer" containerID="bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec" Dec 13 01:55:11.397178 kubelet[2228]: E1213 01:55:11.396836 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:11.397385 kubelet[2228]: E1213 01:55:11.397352 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-7wdql_calico-system(1700d6cc-17fc-42bf-b164-298c2c341d88)\"" pod="calico-system/calico-node-7wdql" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" Dec 13 01:55:12.396990 env[1318]: time="2024-12-13T01:55:12.396942581Z" level=info msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\"" Dec 13 01:55:12.419712 env[1318]: time="2024-12-13T01:55:12.419650567Z" level=error msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\" failed" error="failed to destroy network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:12.419932 kubelet[2228]: E1213 01:55:12.419901 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" Dec 13 01:55:12.420197 kubelet[2228]: E1213 01:55:12.419947 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5"} Dec 13 01:55:12.420197 kubelet[2228]: E1213 01:55:12.419982 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:12.420197 kubelet[2228]: E1213 01:55:12.420011 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qc46k" podUID="03656298-6b0b-422b-a3a9-1c9ae4e861d5" Dec 13 01:55:13.885602 systemd[1]: Started sshd@17-10.0.0.88:22-10.0.0.1:49014.service. Dec 13 01:55:13.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.88:22-10.0.0.1:49014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:13.886643 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:55:13.886668 kernel: audit: type=1130 audit(1734054913.884:364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.88:22-10.0.0.1:49014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:55:13.916000 audit[4020]: USER_ACCT pid=4020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:13.918264 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 49014 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:13.920204 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:13.918000 audit[4020]: CRED_ACQ pid=4020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:13.923517 systemd-logind[1304]: New session 18 of user core. Dec 13 01:55:13.924110 systemd[1]: Started session-18.scope. Dec 13 01:55:13.927888 kernel: audit: type=1101 audit(1734054913.916:365): pid=4020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:13.927950 kernel: audit: type=1103 audit(1734054913.918:366): pid=4020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:13.927970 kernel: audit: type=1006 audit(1734054913.918:367): pid=4020 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 13 01:55:13.918000 audit[4020]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda6c0e060 a2=3 a3=0 items=0 ppid=1 pid=4020 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:13.935111 kernel: audit: type=1300 audit(1734054913.918:367): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda6c0e060 a2=3 a3=0 items=0 ppid=1 pid=4020 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:13.935158 kernel: audit: type=1327 audit(1734054913.918:367): proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:13.918000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:13.928000 audit[4020]: USER_START pid=4020 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:13.941259 kernel: audit: type=1105 audit(1734054913.928:368): pid=4020 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:13.941319 kernel: audit: type=1103 audit(1734054913.929:369): pid=4023 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:13.929000 audit[4023]: CRED_ACQ pid=4023 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:14.027324 sshd[4020]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:14.026000 
audit[4020]: USER_END pid=4020 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:14.029218 systemd[1]: sshd@17-10.0.0.88:22-10.0.0.1:49014.service: Deactivated successfully. Dec 13 01:55:14.030430 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:55:14.030778 systemd-logind[1304]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:55:14.031457 systemd-logind[1304]: Removed session 18. Dec 13 01:55:14.026000 audit[4020]: CRED_DISP pid=4020 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:14.037636 kernel: audit: type=1106 audit(1734054914.026:370): pid=4020 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:14.037677 kernel: audit: type=1104 audit(1734054914.026:371): pid=4020 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:14.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.88:22-10.0.0.1:49014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:55:14.396909 env[1318]: time="2024-12-13T01:55:14.396878619Z" level=info msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\"" Dec 13 01:55:14.397369 env[1318]: time="2024-12-13T01:55:14.397321780Z" level=info msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\"" Dec 13 01:55:14.397483 env[1318]: time="2024-12-13T01:55:14.397191950Z" level=info msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\"" Dec 13 01:55:14.420432 env[1318]: time="2024-12-13T01:55:14.420376575Z" level=error msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\" failed" error="failed to destroy network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:14.420824 kubelet[2228]: E1213 01:55:14.420802 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Dec 13 01:55:14.421090 kubelet[2228]: E1213 01:55:14.420850 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b"} Dec 13 01:55:14.421090 kubelet[2228]: E1213 01:55:14.420888 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:14.421090 kubelet[2228]: E1213 01:55:14.420916 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c24c6b5a-d5bf-438a-ad13-509ca76dd573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q9qs2" podUID="c24c6b5a-d5bf-438a-ad13-509ca76dd573" Dec 13 01:55:14.430667 env[1318]: time="2024-12-13T01:55:14.430614513Z" level=error msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\" failed" error="failed to destroy network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:14.430890 kubelet[2228]: E1213 01:55:14.430863 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Dec 13 01:55:14.430943 kubelet[2228]: E1213 01:55:14.430913 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747"} Dec 13 01:55:14.430968 kubelet[2228]: E1213 01:55:14.430960 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"210d3ffb-3280-4c87-8159-32ab42140bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:14.431027 kubelet[2228]: E1213 01:55:14.430999 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"210d3ffb-3280-4c87-8159-32ab42140bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" podUID="210d3ffb-3280-4c87-8159-32ab42140bc1" Dec 13 01:55:14.431357 env[1318]: time="2024-12-13T01:55:14.431317643Z" level=error msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\" failed" error="failed to destroy network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 01:55:14.431545 kubelet[2228]: E1213 01:55:14.431527 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Dec 13 01:55:14.431590 kubelet[2228]: E1213 01:55:14.431563 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3"} Dec 13 01:55:14.431612 kubelet[2228]: E1213 01:55:14.431597 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:14.431657 kubelet[2228]: E1213 01:55:14.431622 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e3c9f76-8cdf-4757-acdc-92eda3454b96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" 
podUID="2e3c9f76-8cdf-4757-acdc-92eda3454b96" Dec 13 01:55:16.397029 env[1318]: time="2024-12-13T01:55:16.396955146Z" level=info msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\"" Dec 13 01:55:16.421899 env[1318]: time="2024-12-13T01:55:16.421831812Z" level=error msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\" failed" error="failed to destroy network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:16.422110 kubelet[2228]: E1213 01:55:16.422078 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Dec 13 01:55:16.422430 kubelet[2228]: E1213 01:55:16.422123 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef"} Dec 13 01:55:16.422430 kubelet[2228]: E1213 01:55:16.422156 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Dec 13 01:55:16.422430 kubelet[2228]: E1213 01:55:16.422189 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7369f4a7-4a25-4cba-bc4e-08b9ad330777\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t2vq9" podUID="7369f4a7-4a25-4cba-bc4e-08b9ad330777" Dec 13 01:55:17.397199 kubelet[2228]: E1213 01:55:17.397145 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:18.397198 kubelet[2228]: E1213 01:55:18.397123 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:19.030225 systemd[1]: Started sshd@18-10.0.0.88:22-10.0.0.1:52450.service. Dec 13 01:55:19.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.88:22-10.0.0.1:52450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:19.032842 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:55:19.032881 kernel: audit: type=1130 audit(1734054919.030:373): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.88:22-10.0.0.1:52450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:55:19.063000 audit[4132]: USER_ACCT pid=4132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.064106 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 52450 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:19.065645 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:19.065000 audit[4132]: CRED_ACQ pid=4132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.068723 systemd-logind[1304]: New session 19 of user core. Dec 13 01:55:19.069412 systemd[1]: Started session-19.scope. Dec 13 01:55:19.072289 kernel: audit: type=1101 audit(1734054919.063:374): pid=4132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.072355 kernel: audit: type=1103 audit(1734054919.065:375): pid=4132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.072372 kernel: audit: type=1006 audit(1734054919.065:376): pid=4132 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 13 01:55:19.065000 audit[4132]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaaee50e0 a2=3 a3=0 items=0 ppid=1 pid=4132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:19.079138 kernel: audit: type=1300 audit(1734054919.065:376): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaaee50e0 a2=3 a3=0 items=0 ppid=1 pid=4132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:19.079188 kernel: audit: type=1327 audit(1734054919.065:376): proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:19.065000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:19.080559 kernel: audit: type=1105 audit(1734054919.073:377): pid=4132 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.073000 audit[4132]: USER_START pid=4132 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.085129 kernel: audit: type=1103 audit(1734054919.074:378): pid=4135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.074000 audit[4135]: CRED_ACQ pid=4135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.169106 sshd[4132]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:19.169000 
audit[4132]: USER_END pid=4132 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.171615 systemd[1]: sshd@18-10.0.0.88:22-10.0.0.1:52450.service: Deactivated successfully. Dec 13 01:55:19.172725 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:55:19.172753 systemd-logind[1304]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:55:19.173726 systemd-logind[1304]: Removed session 19. Dec 13 01:55:19.169000 audit[4132]: CRED_DISP pid=4132 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.178618 kernel: audit: type=1106 audit(1734054919.169:379): pid=4132 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.178660 kernel: audit: type=1104 audit(1734054919.169:380): pid=4132 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:19.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.88:22-10.0.0.1:52450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:55:20.396717 kubelet[2228]: E1213 01:55:20.396674 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:21.397181 env[1318]: time="2024-12-13T01:55:21.397132745Z" level=info msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\"" Dec 13 01:55:21.417549 env[1318]: time="2024-12-13T01:55:21.417510229Z" level=info msg="StopPodSandbox for \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\"" Dec 13 01:55:21.417875 env[1318]: time="2024-12-13T01:55:21.417852112Z" level=info msg="Container to stop \"bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:55:21.418007 env[1318]: time="2024-12-13T01:55:21.417963496Z" level=info msg="Container to stop \"262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:55:21.418121 env[1318]: time="2024-12-13T01:55:21.418098885Z" level=info msg="Container to stop \"185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:55:21.420630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab-shm.mount: Deactivated successfully. 
Dec 13 01:55:21.423863 env[1318]: time="2024-12-13T01:55:21.423830667Z" level=error msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\" failed" error="failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:21.424157 kubelet[2228]: E1213 01:55:21.424125 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:55:21.424402 kubelet[2228]: E1213 01:55:21.424174 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11"} Dec 13 01:55:21.424402 kubelet[2228]: E1213 01:55:21.424208 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:21.424402 kubelet[2228]: E1213 01:55:21.424238 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"57931067-2814-4ccb-9fc1-1f61db24c542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" podUID="57931067-2814-4ccb-9fc1-1f61db24c542" Dec 13 01:55:21.437347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab-rootfs.mount: Deactivated successfully. Dec 13 01:55:21.578850 env[1318]: time="2024-12-13T01:55:21.578800633Z" level=info msg="shim disconnected" id=d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab Dec 13 01:55:21.578850 env[1318]: time="2024-12-13T01:55:21.578841491Z" level=warning msg="cleaning up after shim disconnected" id=d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab namespace=k8s.io Dec 13 01:55:21.578850 env[1318]: time="2024-12-13T01:55:21.578850307Z" level=info msg="cleaning up dead shim" Dec 13 01:55:21.584987 env[1318]: time="2024-12-13T01:55:21.584931018Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:55:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191 runtime=io.containerd.runc.v2\n" Dec 13 01:55:21.585255 env[1318]: time="2024-12-13T01:55:21.585231642Z" level=info msg="TearDown network for sandbox \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" successfully" Dec 13 01:55:21.585353 env[1318]: time="2024-12-13T01:55:21.585254406Z" level=info msg="StopPodSandbox for \"d709618f478408390c0773ab47b93eff31b69ebc83b8568daf0b723abcf5bcab\" returns successfully" Dec 13 01:55:21.649114 kubelet[2228]: I1213 01:55:21.648587 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-lib-modules\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649114 kubelet[2228]: I1213 01:55:21.648626 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-bin-dir\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649114 kubelet[2228]: I1213 01:55:21.648645 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-xtables-lock\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649114 kubelet[2228]: I1213 01:55:21.648661 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-var-run-calico\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649114 kubelet[2228]: I1213 01:55:21.648678 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-log-dir\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649114 kubelet[2228]: I1213 01:55:21.648695 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-net-dir\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649431 kubelet[2228]: I1213 01:55:21.648719 2228 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gcqs\" (UniqueName: \"kubernetes.io/projected/1700d6cc-17fc-42bf-b164-298c2c341d88-kube-api-access-2gcqs\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649431 kubelet[2228]: I1213 01:55:21.648720 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:55:21.649431 kubelet[2228]: I1213 01:55:21.648720 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:55:21.649431 kubelet[2228]: I1213 01:55:21.648740 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1700d6cc-17fc-42bf-b164-298c2c341d88-node-certs\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649431 kubelet[2228]: I1213 01:55:21.648773 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:55:21.649554 kubelet[2228]: I1213 01:55:21.648790 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:55:21.649554 kubelet[2228]: I1213 01:55:21.648804 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-flexvol-driver-host\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649554 kubelet[2228]: I1213 01:55:21.648818 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:55:21.649554 kubelet[2228]: I1213 01:55:21.648832 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-policysync\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649554 kubelet[2228]: I1213 01:55:21.648835 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). 
InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:55:21.649669 kubelet[2228]: I1213 01:55:21.648849 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:55:21.649669 kubelet[2228]: I1213 01:55:21.648856 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1700d6cc-17fc-42bf-b164-298c2c341d88-tigera-ca-bundle\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649669 kubelet[2228]: I1213 01:55:21.648863 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-policysync" (OuterVolumeSpecName: "policysync") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:55:21.649669 kubelet[2228]: I1213 01:55:21.648875 2228 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-var-lib-calico\") pod \"1700d6cc-17fc-42bf-b164-298c2c341d88\" (UID: \"1700d6cc-17fc-42bf-b164-298c2c341d88\") " Dec 13 01:55:21.649669 kubelet[2228]: I1213 01:55:21.648916 2228 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.649669 kubelet[2228]: I1213 01:55:21.648928 2228 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.649807 kubelet[2228]: I1213 01:55:21.648936 2228 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.649807 kubelet[2228]: I1213 01:55:21.648945 2228 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-var-run-calico\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.649807 kubelet[2228]: I1213 01:55:21.648954 2228 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.649807 kubelet[2228]: I1213 01:55:21.648961 2228 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-cni-net-dir\") on node \"localhost\" DevicePath 
\"\"" Dec 13 01:55:21.649807 kubelet[2228]: I1213 01:55:21.648969 2228 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-policysync\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.649807 kubelet[2228]: I1213 01:55:21.648978 2228 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.649807 kubelet[2228]: I1213 01:55:21.648994 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:55:21.653243 systemd[1]: var-lib-kubelet-pods-1700d6cc\x2d17fc\x2d42bf\x2db164\x2d298c2c341d88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2gcqs.mount: Deactivated successfully. Dec 13 01:55:21.653384 systemd[1]: var-lib-kubelet-pods-1700d6cc\x2d17fc\x2d42bf\x2db164\x2d298c2c341d88-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Dec 13 01:55:21.655988 systemd[1]: var-lib-kubelet-pods-1700d6cc\x2d17fc\x2d42bf\x2db164\x2d298c2c341d88-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Dec 13 01:55:21.656974 kubelet[2228]: I1213 01:55:21.656952 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1700d6cc-17fc-42bf-b164-298c2c341d88-kube-api-access-2gcqs" (OuterVolumeSpecName: "kube-api-access-2gcqs") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "kube-api-access-2gcqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:55:21.657032 kubelet[2228]: I1213 01:55:21.656974 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1700d6cc-17fc-42bf-b164-298c2c341d88-node-certs" (OuterVolumeSpecName: "node-certs") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:55:21.657479 kubelet[2228]: I1213 01:55:21.657454 2228 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1700d6cc-17fc-42bf-b164-298c2c341d88-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "1700d6cc-17fc-42bf-b164-298c2c341d88" (UID: "1700d6cc-17fc-42bf-b164-298c2c341d88"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:55:21.749402 kubelet[2228]: I1213 01:55:21.749362 2228 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2gcqs\" (UniqueName: \"kubernetes.io/projected/1700d6cc-17fc-42bf-b164-298c2c341d88-kube-api-access-2gcqs\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.749402 kubelet[2228]: I1213 01:55:21.749391 2228 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1700d6cc-17fc-42bf-b164-298c2c341d88-node-certs\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.749402 kubelet[2228]: I1213 01:55:21.749401 2228 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1700d6cc-17fc-42bf-b164-298c2c341d88-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 01:55:21.749402 kubelet[2228]: I1213 01:55:21.749410 2228 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1700d6cc-17fc-42bf-b164-298c2c341d88-var-lib-calico\") on node \"localhost\" 
DevicePath \"\"" Dec 13 01:55:21.807153 kubelet[2228]: I1213 01:55:21.807119 2228 topology_manager.go:215] "Topology Admit Handler" podUID="53532c5a-b516-4c6d-94d8-9c85251ae8db" podNamespace="calico-system" podName="calico-node-xr9fc" Dec 13 01:55:21.807329 kubelet[2228]: E1213 01:55:21.807199 2228 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" containerName="calico-node" Dec 13 01:55:21.807329 kubelet[2228]: E1213 01:55:21.807215 2228 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" containerName="flexvol-driver" Dec 13 01:55:21.807329 kubelet[2228]: E1213 01:55:21.807222 2228 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" containerName="install-cni" Dec 13 01:55:21.807329 kubelet[2228]: E1213 01:55:21.807228 2228 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" containerName="calico-node" Dec 13 01:55:21.807329 kubelet[2228]: I1213 01:55:21.807257 2228 memory_manager.go:354] "RemoveStaleState removing state" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" containerName="calico-node" Dec 13 01:55:21.807329 kubelet[2228]: I1213 01:55:21.807283 2228 memory_manager.go:354] "RemoveStaleState removing state" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" containerName="calico-node" Dec 13 01:55:21.807329 kubelet[2228]: I1213 01:55:21.807289 2228 memory_manager.go:354] "RemoveStaleState removing state" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" containerName="calico-node" Dec 13 01:55:21.807329 kubelet[2228]: E1213 01:55:21.807310 2228 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" containerName="calico-node" Dec 13 01:55:21.849965 kubelet[2228]: I1213 01:55:21.849909 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wxkp5\" (UniqueName: \"kubernetes.io/projected/53532c5a-b516-4c6d-94d8-9c85251ae8db-kube-api-access-wxkp5\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.849965 kubelet[2228]: I1213 01:55:21.849970 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/53532c5a-b516-4c6d-94d8-9c85251ae8db-cni-bin-dir\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850153 kubelet[2228]: I1213 01:55:21.849990 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53532c5a-b516-4c6d-94d8-9c85251ae8db-lib-modules\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850153 kubelet[2228]: I1213 01:55:21.850009 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/53532c5a-b516-4c6d-94d8-9c85251ae8db-policysync\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850153 kubelet[2228]: I1213 01:55:21.850031 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53532c5a-b516-4c6d-94d8-9c85251ae8db-tigera-ca-bundle\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850153 kubelet[2228]: I1213 01:55:21.850049 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/53532c5a-b516-4c6d-94d8-9c85251ae8db-flexvol-driver-host\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850153 kubelet[2228]: I1213 01:55:21.850069 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53532c5a-b516-4c6d-94d8-9c85251ae8db-xtables-lock\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850284 kubelet[2228]: I1213 01:55:21.850085 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/53532c5a-b516-4c6d-94d8-9c85251ae8db-cni-net-dir\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850284 kubelet[2228]: I1213 01:55:21.850100 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/53532c5a-b516-4c6d-94d8-9c85251ae8db-cni-log-dir\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850284 kubelet[2228]: I1213 01:55:21.850117 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/53532c5a-b516-4c6d-94d8-9c85251ae8db-node-certs\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850284 kubelet[2228]: I1213 01:55:21.850135 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/53532c5a-b516-4c6d-94d8-9c85251ae8db-var-run-calico\") pod 
\"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:21.850284 kubelet[2228]: I1213 01:55:21.850154 2228 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53532c5a-b516-4c6d-94d8-9c85251ae8db-var-lib-calico\") pod \"calico-node-xr9fc\" (UID: \"53532c5a-b516-4c6d-94d8-9c85251ae8db\") " pod="calico-system/calico-node-xr9fc" Dec 13 01:55:22.110458 kubelet[2228]: E1213 01:55:22.110427 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:22.110937 env[1318]: time="2024-12-13T01:55:22.110901688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xr9fc,Uid:53532c5a-b516-4c6d-94d8-9c85251ae8db,Namespace:calico-system,Attempt:0,}" Dec 13 01:55:22.261308 env[1318]: time="2024-12-13T01:55:22.261228733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:22.261308 env[1318]: time="2024-12-13T01:55:22.261276645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:22.261308 env[1318]: time="2024-12-13T01:55:22.261288057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:22.261738 env[1318]: time="2024-12-13T01:55:22.261688974Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27d5aa674c7c435332c93682097b42a35d7d674ece23e1c3da1155bb1ab5ebc2 pid=4215 runtime=io.containerd.runc.v2 Dec 13 01:55:22.291549 env[1318]: time="2024-12-13T01:55:22.291499762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xr9fc,Uid:53532c5a-b516-4c6d-94d8-9c85251ae8db,Namespace:calico-system,Attempt:0,} returns sandbox id \"27d5aa674c7c435332c93682097b42a35d7d674ece23e1c3da1155bb1ab5ebc2\"" Dec 13 01:55:22.292056 kubelet[2228]: E1213 01:55:22.292035 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:22.293879 env[1318]: time="2024-12-13T01:55:22.293468304Z" level=info msg="CreateContainer within sandbox \"27d5aa674c7c435332c93682097b42a35d7d674ece23e1c3da1155bb1ab5ebc2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:55:22.351600 env[1318]: time="2024-12-13T01:55:22.351535350Z" level=info msg="CreateContainer within sandbox \"27d5aa674c7c435332c93682097b42a35d7d674ece23e1c3da1155bb1ab5ebc2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4bdb09f4d9c41ca0e9a3bd4aa06b895a4991aac39ce6885c9fad09cf27a8825a\"" Dec 13 01:55:22.352239 env[1318]: time="2024-12-13T01:55:22.352082245Z" level=info msg="StartContainer for \"4bdb09f4d9c41ca0e9a3bd4aa06b895a4991aac39ce6885c9fad09cf27a8825a\"" Dec 13 01:55:22.493298 env[1318]: time="2024-12-13T01:55:22.493119262Z" level=info msg="StartContainer for \"4bdb09f4d9c41ca0e9a3bd4aa06b895a4991aac39ce6885c9fad09cf27a8825a\" returns successfully" Dec 13 01:55:22.529733 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-4bdb09f4d9c41ca0e9a3bd4aa06b895a4991aac39ce6885c9fad09cf27a8825a-rootfs.mount: Deactivated successfully. Dec 13 01:55:22.566826 kubelet[2228]: I1213 01:55:22.566796 2228 scope.go:117] "RemoveContainer" containerID="bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec" Dec 13 01:55:22.571513 env[1318]: time="2024-12-13T01:55:22.571467568Z" level=info msg="RemoveContainer for \"bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec\"" Dec 13 01:55:22.573130 kubelet[2228]: E1213 01:55:22.573101 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:22.660741 env[1318]: time="2024-12-13T01:55:22.660683219Z" level=info msg="shim disconnected" id=4bdb09f4d9c41ca0e9a3bd4aa06b895a4991aac39ce6885c9fad09cf27a8825a Dec 13 01:55:22.660741 env[1318]: time="2024-12-13T01:55:22.660750066Z" level=warning msg="cleaning up after shim disconnected" id=4bdb09f4d9c41ca0e9a3bd4aa06b895a4991aac39ce6885c9fad09cf27a8825a namespace=k8s.io Dec 13 01:55:22.660940 env[1318]: time="2024-12-13T01:55:22.660759273Z" level=info msg="cleaning up dead shim" Dec 13 01:55:22.663951 env[1318]: time="2024-12-13T01:55:22.663896992Z" level=info msg="RemoveContainer for \"bdc04db3be08c65cb1a400d47f49237c119feda42fe54f1f131c06b0d380d7ec\" returns successfully" Dec 13 01:55:22.665445 kubelet[2228]: I1213 01:55:22.665402 2228 scope.go:117] "RemoveContainer" containerID="185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc" Dec 13 01:55:22.666618 env[1318]: time="2024-12-13T01:55:22.666573088Z" level=info msg="RemoveContainer for \"185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc\"" Dec 13 01:55:22.672323 env[1318]: time="2024-12-13T01:55:22.671378094Z" level=info msg="RemoveContainer for \"185971a1ca55da444e9944bf75667ec7e77df5385d68c6b258d3ad1822966fcc\" returns successfully" 
Dec 13 01:55:22.672479 kubelet[2228]: I1213 01:55:22.671608 2228 scope.go:117] "RemoveContainer" containerID="262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480" Dec 13 01:55:22.672632 env[1318]: time="2024-12-13T01:55:22.672615619Z" level=info msg="RemoveContainer for \"262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480\"" Dec 13 01:55:22.675568 env[1318]: time="2024-12-13T01:55:22.675529810Z" level=info msg="RemoveContainer for \"262ba1a55ecdc38ae02185a6a149dbf708e03b5f6a6cf08307dbde097646d480\" returns successfully" Dec 13 01:55:22.677633 env[1318]: time="2024-12-13T01:55:22.677575250Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:55:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4304 runtime=io.containerd.runc.v2\n" Dec 13 01:55:23.398947 kubelet[2228]: I1213 01:55:23.398897 2228 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1700d6cc-17fc-42bf-b164-298c2c341d88" path="/var/lib/kubelet/pods/1700d6cc-17fc-42bf-b164-298c2c341d88/volumes" Dec 13 01:55:23.575426 kubelet[2228]: E1213 01:55:23.575398 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:23.577485 env[1318]: time="2024-12-13T01:55:23.577434944Z" level=info msg="CreateContainer within sandbox \"27d5aa674c7c435332c93682097b42a35d7d674ece23e1c3da1155bb1ab5ebc2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:55:23.629877 env[1318]: time="2024-12-13T01:55:23.629825530Z" level=info msg="CreateContainer within sandbox \"27d5aa674c7c435332c93682097b42a35d7d674ece23e1c3da1155bb1ab5ebc2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8659912aa0026eb345ef76c1a958b7cf348f76e560a90d4095a2217af4a43461\"" Dec 13 01:55:23.630449 env[1318]: time="2024-12-13T01:55:23.630418893Z" level=info msg="StartContainer for 
\"8659912aa0026eb345ef76c1a958b7cf348f76e560a90d4095a2217af4a43461\"" Dec 13 01:55:23.678936 env[1318]: time="2024-12-13T01:55:23.678813134Z" level=info msg="StartContainer for \"8659912aa0026eb345ef76c1a958b7cf348f76e560a90d4095a2217af4a43461\" returns successfully" Dec 13 01:55:24.165741 env[1318]: time="2024-12-13T01:55:24.165690748Z" level=info msg="shim disconnected" id=8659912aa0026eb345ef76c1a958b7cf348f76e560a90d4095a2217af4a43461 Dec 13 01:55:24.165741 env[1318]: time="2024-12-13T01:55:24.165739582Z" level=warning msg="cleaning up after shim disconnected" id=8659912aa0026eb345ef76c1a958b7cf348f76e560a90d4095a2217af4a43461 namespace=k8s.io Dec 13 01:55:24.165741 env[1318]: time="2024-12-13T01:55:24.165747707Z" level=info msg="cleaning up dead shim" Dec 13 01:55:24.173778 systemd[1]: Started sshd@19-10.0.0.88:22-10.0.0.1:52466.service. Dec 13 01:55:24.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.88:22-10.0.0.1:52466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:24.175919 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:55:24.175985 kernel: audit: type=1130 audit(1734054924.173:382): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.88:22-10.0.0.1:52466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:55:24.183938 env[1318]: time="2024-12-13T01:55:24.183886420Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:55:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4365 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T01:55:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Dec 13 01:55:24.209000 audit[4378]: USER_ACCT pid=4378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.210123 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 52466 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:24.211839 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:24.211000 audit[4378]: CRED_ACQ pid=4378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.216659 systemd[1]: Started session-20.scope. Dec 13 01:55:24.216936 systemd-logind[1304]: New session 20 of user core. 
Dec 13 01:55:24.219258 kernel: audit: type=1101 audit(1734054924.209:383): pid=4378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.219337 kernel: audit: type=1103 audit(1734054924.211:384): pid=4378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.219377 kernel: audit: type=1006 audit(1734054924.211:385): pid=4378 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 13 01:55:24.211000 audit[4378]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe19ab5e50 a2=3 a3=0 items=0 ppid=1 pid=4378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:24.226219 kernel: audit: type=1300 audit(1734054924.211:385): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe19ab5e50 a2=3 a3=0 items=0 ppid=1 pid=4378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:24.226299 kernel: audit: type=1327 audit(1734054924.211:385): proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:24.211000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:24.221000 audit[4378]: USER_START pid=4378 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Dec 13 01:55:24.232047 kernel: audit: type=1105 audit(1734054924.221:386): pid=4378 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.232094 kernel: audit: type=1103 audit(1734054924.222:387): pid=4382 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.222000 audit[4382]: CRED_ACQ pid=4382 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.371094 sshd[4378]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:24.380688 kernel: audit: type=1106 audit(1734054924.371:388): pid=4378 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.380779 kernel: audit: type=1104 audit(1734054924.371:389): pid=4378 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.371000 audit[4378]: USER_END pid=4378 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 
01:55:24.371000 audit[4378]: CRED_DISP pid=4378 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:24.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.88:22-10.0.0.1:52466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:24.373457 systemd[1]: sshd@19-10.0.0.88:22-10.0.0.1:52466.service: Deactivated successfully. Dec 13 01:55:24.374368 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:55:24.375417 systemd-logind[1304]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:55:24.376151 systemd-logind[1304]: Removed session 20. Dec 13 01:55:24.396725 env[1318]: time="2024-12-13T01:55:24.396687387Z" level=info msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\"" Dec 13 01:55:24.396956 kubelet[2228]: E1213 01:55:24.396931 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:24.420676 env[1318]: time="2024-12-13T01:55:24.420531857Z" level=error msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\" failed" error="failed to destroy network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:24.420872 kubelet[2228]: E1213 01:55:24.420832 2228 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" Dec 13 01:55:24.420957 kubelet[2228]: E1213 01:55:24.420888 2228 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5"} Dec 13 01:55:24.420957 kubelet[2228]: E1213 01:55:24.420926 2228 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:24.420957 kubelet[2228]: E1213 01:55:24.420952 2228 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03656298-6b0b-422b-a3a9-1c9ae4e861d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qc46k" podUID="03656298-6b0b-422b-a3a9-1c9ae4e861d5" Dec 13 01:55:24.580212 kubelet[2228]: E1213 01:55:24.580174 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Dec 13 01:55:24.586398 env[1318]: time="2024-12-13T01:55:24.586351204Z" level=info msg="CreateContainer within sandbox \"27d5aa674c7c435332c93682097b42a35d7d674ece23e1c3da1155bb1ab5ebc2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:55:24.598378 env[1318]: time="2024-12-13T01:55:24.598308011Z" level=info msg="CreateContainer within sandbox \"27d5aa674c7c435332c93682097b42a35d7d674ece23e1c3da1155bb1ab5ebc2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7829c9053fd327d3818f7c52be0ca550788aba2d120a36c3733d20e69a40607f\"" Dec 13 01:55:24.599679 env[1318]: time="2024-12-13T01:55:24.598882467Z" level=info msg="StartContainer for \"7829c9053fd327d3818f7c52be0ca550788aba2d120a36c3733d20e69a40607f\"" Dec 13 01:55:24.628829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8659912aa0026eb345ef76c1a958b7cf348f76e560a90d4095a2217af4a43461-rootfs.mount: Deactivated successfully. Dec 13 01:55:24.643748 env[1318]: time="2024-12-13T01:55:24.643708290Z" level=info msg="StartContainer for \"7829c9053fd327d3818f7c52be0ca550788aba2d120a36c3733d20e69a40607f\" returns successfully" Dec 13 01:55:25.396607 env[1318]: time="2024-12-13T01:55:25.396559498Z" level=info msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\"" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.438 [INFO][4495] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.438 [INFO][4495] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" iface="eth0" netns="/var/run/netns/cni-80319c9e-5de8-3ee8-902f-e57e9ddf7ab1" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.438 [INFO][4495] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" iface="eth0" netns="/var/run/netns/cni-80319c9e-5de8-3ee8-902f-e57e9ddf7ab1" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.438 [INFO][4495] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" iface="eth0" netns="/var/run/netns/cni-80319c9e-5de8-3ee8-902f-e57e9ddf7ab1" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.438 [INFO][4495] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.438 [INFO][4495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.457 [INFO][4503] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" HandleID="k8s-pod-network.de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Workload="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.458 [INFO][4503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.458 [INFO][4503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.462 [WARNING][4503] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" HandleID="k8s-pod-network.de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Workload="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.462 [INFO][4503] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" HandleID="k8s-pod-network.de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Workload="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.464 [INFO][4503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:25.467939 env[1318]: 2024-12-13 01:55:25.465 [INFO][4495] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3" Dec 13 01:55:25.468463 env[1318]: time="2024-12-13T01:55:25.468066698Z" level=info msg="TearDown network for sandbox \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\" successfully" Dec 13 01:55:25.468463 env[1318]: time="2024-12-13T01:55:25.468106485Z" level=info msg="StopPodSandbox for \"de27f0858744a735aa9a877e0c362d23affbf16a2dc9f08c1d4c00bb4d7ccbf3\" returns successfully" Dec 13 01:55:25.472302 env[1318]: time="2024-12-13T01:55:25.472215619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f458bd975-jfg7r,Uid:2e3c9f76-8cdf-4757-acdc-92eda3454b96,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:55:25.473034 systemd[1]: run-netns-cni\x2d80319c9e\x2d5de8\x2d3ee8\x2d902f\x2de57e9ddf7ab1.mount: Deactivated successfully. 
Dec 13 01:55:25.584206 kubelet[2228]: E1213 01:55:25.584105 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:25.596990 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:55:25.597088 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calieedbd1b2a99: link becomes ready Dec 13 01:55:25.599539 systemd-networkd[1092]: calieedbd1b2a99: Link UP Dec 13 01:55:25.599696 systemd-networkd[1092]: calieedbd1b2a99: Gained carrier Dec 13 01:55:25.600156 systemd[1]: run-containerd-runc-k8s.io-7829c9053fd327d3818f7c52be0ca550788aba2d120a36c3733d20e69a40607f-runc.WbEyRq.mount: Deactivated successfully. Dec 13 01:55:25.608712 kubelet[2228]: I1213 01:55:25.608672 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-xr9fc" podStartSLOduration=4.608630982 podStartE2EDuration="4.608630982s" podCreationTimestamp="2024-12-13 01:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:25.60607346 +0000 UTC m=+84.345258131" watchObservedRunningTime="2024-12-13 01:55:25.608630982 +0000 UTC m=+84.347815663" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.507 [INFO][4510] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.516 [INFO][4510] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0 calico-apiserver-7f458bd975- calico-apiserver 2e3c9f76-8cdf-4757-acdc-92eda3454b96 1080 0 2024-12-13 01:54:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f458bd975 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f458bd975-jfg7r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieedbd1b2a99 [] []}} ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-jfg7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.516 [INFO][4510] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-jfg7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.555 [INFO][4523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" HandleID="k8s-pod-network.501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Workload="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.563 [INFO][4523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" HandleID="k8s-pod-network.501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Workload="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd0a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f458bd975-jfg7r", "timestamp":"2024-12-13 01:55:25.555472901 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.563 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.563 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.564 [INFO][4523] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.565 [INFO][4523] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" host="localhost" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.568 [INFO][4523] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.571 [INFO][4523] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.572 [INFO][4523] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.574 [INFO][4523] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.574 [INFO][4523] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" host="localhost" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.575 [INFO][4523] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198 Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.578 [INFO][4523] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" host="localhost" 
Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.581 [INFO][4523] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" host="localhost" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.582 [INFO][4523] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" host="localhost" Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.582 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:25.620465 env[1318]: 2024-12-13 01:55:25.582 [INFO][4523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" HandleID="k8s-pod-network.501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Workload="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.622111 env[1318]: 2024-12-13 01:55:25.588 [INFO][4510] cni-plugin/k8s.go 386: Populated endpoint ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-jfg7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0", GenerateName:"calico-apiserver-7f458bd975-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e3c9f76-8cdf-4757-acdc-92eda3454b96", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7f458bd975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f458bd975-jfg7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieedbd1b2a99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:25.622111 env[1318]: 2024-12-13 01:55:25.588 [INFO][4510] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-jfg7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.622111 env[1318]: 2024-12-13 01:55:25.588 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieedbd1b2a99 ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-jfg7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.622111 env[1318]: 2024-12-13 01:55:25.597 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-jfg7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.622111 env[1318]: 2024-12-13 
01:55:25.597 [INFO][4510] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-jfg7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0", GenerateName:"calico-apiserver-7f458bd975-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e3c9f76-8cdf-4757-acdc-92eda3454b96", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f458bd975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198", Pod:"calico-apiserver-7f458bd975-jfg7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieedbd1b2a99", MAC:"96:02:8c:b3:e0:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:25.622111 env[1318]: 2024-12-13 01:55:25.614 [INFO][4510] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-jfg7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--jfg7r-eth0" Dec 13 01:55:25.632480 env[1318]: time="2024-12-13T01:55:25.632395669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:25.632480 env[1318]: time="2024-12-13T01:55:25.632474069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:25.632637 env[1318]: time="2024-12-13T01:55:25.632495781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:25.632679 env[1318]: time="2024-12-13T01:55:25.632649725Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198 pid=4574 runtime=io.containerd.runc.v2 Dec 13 01:55:25.655457 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:55:25.677617 env[1318]: time="2024-12-13T01:55:25.677566493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f458bd975-jfg7r,Uid:2e3c9f76-8cdf-4757-acdc-92eda3454b96,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198\"" Dec 13 01:55:25.679302 env[1318]: time="2024-12-13T01:55:25.679088218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:55:25.966000 audit[4647]: AVC avc: denied { write } for pid=4647 comm="tee" name="fd" dev="proc" ino=27224 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:55:25.966000 
audit[4647]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde5a25a2a a2=241 a3=1b6 items=1 ppid=4618 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:25.966000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 01:55:25.966000 audit: PATH item=0 name="/dev/fd/63" inode=26359 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:55:25.966000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:55:25.970000 audit[4653]: AVC avc: denied { write } for pid=4653 comm="tee" name="fd" dev="proc" ino=27228 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:55:25.970000 audit[4653]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcccbdea2b a2=241 a3=1b6 items=1 ppid=4614 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:25.970000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 01:55:25.970000 audit: PATH item=0 name="/dev/fd/63" inode=26362 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:55:25.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:55:25.993000 audit[4644]: AVC avc: denied { write } for pid=4644 comm="tee" name="fd" dev="proc" ino=28921 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:55:25.993000 audit[4644]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd884ca1b a2=241 a3=1b6 items=1 ppid=4624 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:25.993000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 01:55:25.993000 audit: PATH item=0 name="/dev/fd/63" inode=26352 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:55:25.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:55:25.997000 audit[4685]: AVC avc: denied { write } for pid=4685 comm="tee" name="fd" dev="proc" ino=26368 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:55:25.998000 audit[4678]: AVC avc: denied { write } for pid=4678 comm="tee" name="fd" dev="proc" ino=28077 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:55:25.997000 audit[4685]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd53e18a2c a2=241 a3=1b6 items=1 ppid=4619 pid=4685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:25.997000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 01:55:25.997000 audit: PATH item=0 name="/dev/fd/63" inode=28074 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:55:25.997000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:55:25.998000 audit[4678]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf4b3aa2a a2=241 a3=1b6 items=1 ppid=4617 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:25.998000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 01:55:25.998000 audit: PATH item=0 name="/dev/fd/63" inode=28073 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:55:25.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:55:26.015000 audit[4694]: AVC avc: denied { write } for pid=4694 comm="tee" name="fd" dev="proc" ino=27238 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:55:26.015000 audit[4694]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff4db4ca2a a2=241 a3=1b6 items=1 ppid=4630 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.015000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 01:55:26.015000 audit: PATH item=0 name="/dev/fd/63" inode=27235 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:55:26.015000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:55:26.017000 audit[4697]: AVC avc: denied { write } for pid=4697 comm="tee" name="fd" dev="proc" ino=27245 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:55:26.017000 audit[4697]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc078e0a1a a2=241 a3=1b6 items=1 ppid=4626 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.017000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 01:55:26.017000 audit: PATH item=0 name="/dev/fd/63" inode=27242 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:55:26.017000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:55:26.087000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.087000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.087000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.087000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.087000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.087000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.087000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.087000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.087000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.087000 audit: BPF prog-id=10 op=LOAD Dec 13 01:55:26.087000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff369089a0 a2=98 a3=3 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.087000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.087000 audit: BPF prog-id=10 op=UNLOAD Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit: BPF prog-id=11 op=LOAD Dec 13 01:55:26.088000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff36908780 a2=74 a3=540051 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 01:55:26.088000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.088000 audit: BPF prog-id=11 op=UNLOAD Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.088000 audit: BPF prog-id=12 op=LOAD Dec 13 01:55:26.088000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff369087b0 a2=94 a3=2 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.088000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.088000 audit: BPF prog-id=12 op=UNLOAD Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { perfmon } 
for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit: BPF prog-id=13 op=LOAD Dec 13 01:55:26.216000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff36908670 a2=40 a3=1 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.216000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.216000 audit: BPF prog-id=13 op=UNLOAD Dec 13 01:55:26.216000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.216000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff36908740 a2=50 a3=7fff36908820 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.216000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 
audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff36908680 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff369086b0 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff369085c0 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 
a1=7fff369086d0 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff369086b0 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff369086a0 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff369086d0 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff369086b0 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff369086d0 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff369086a0 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff36908710 a2=28 a3=0 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff369084c0 a2=50 a3=1 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit: BPF prog-id=14 op=LOAD Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff369084c0 a2=94 a3=5 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit: BPF prog-id=14 op=UNLOAD Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } 
for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff36908570 a2=50 a3=1 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff36908690 a2=4 a3=38 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.223000 audit[4712]: AVC avc: denied { confidentiality } for pid=4712 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:55:26.223000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff369086e0 a2=94 a3=6 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.223000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
01:55:26.224000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { confidentiality } for pid=4712 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:55:26.224000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff36907e90 a2=94 a3=83 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC 
avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { perfmon } for pid=4712 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { bpf } for pid=4712 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.224000 audit[4712]: AVC avc: denied { confidentiality } for pid=4712 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:55:26.224000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff36907e90 a2=94 a3=83 items=0 ppid=4620 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:55:26.230000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.230000 audit[4736]: AVC avc: denied { bpf } for 
pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.230000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.230000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.230000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.230000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.230000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.230000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.230000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.230000 audit: BPF prog-id=15 op=LOAD Dec 13 01:55:26.230000 audit[4736]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2f3b2a90 a2=98 a3=1999999999999999 items=0 ppid=4620 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.230000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 01:55:26.231000 audit: BPF prog-id=15 op=UNLOAD Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { bpf } for 
pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit: BPF prog-id=16 op=LOAD Dec 13 01:55:26.231000 audit[4736]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2f3b2970 a2=74 a3=ffff items=0 ppid=4620 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.231000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 01:55:26.231000 audit: BPF prog-id=16 op=UNLOAD Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: 
AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { perfmon } for pid=4736 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit[4736]: AVC avc: denied { bpf } for pid=4736 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.231000 audit: BPF prog-id=17 op=LOAD Dec 13 01:55:26.231000 audit[4736]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2f3b29b0 a2=40 a3=7ffc2f3b2b90 items=0 ppid=4620 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.231000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 01:55:26.231000 audit: BPF prog-id=17 op=UNLOAD Dec 13 01:55:26.266526 systemd-networkd[1092]: vxlan.calico: Link UP Dec 13 01:55:26.266532 systemd-networkd[1092]: vxlan.calico: Gained carrier Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } 
for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit: BPF prog-id=18 op=LOAD Dec 13 01:55:26.277000 audit[4761]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffd8b2a39e0 a2=98 a3=ffffffff items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.277000 audit: BPF prog-id=18 op=UNLOAD Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit: BPF prog-id=19 op=LOAD Dec 13 01:55:26.277000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8b2a37f0 a2=74 a3=540051 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.277000 audit: BPF prog-id=19 op=UNLOAD Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit: BPF prog-id=20 op=LOAD Dec 13 01:55:26.277000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8b2a3820 a2=94 a3=2 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.277000 audit: BPF prog-id=20 op=UNLOAD Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd8b2a36f0 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd8b2a3720 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd8b2a3630 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd8b2a3740 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd8b2a3720 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.277000 audit[4761]: AVC avc: denied { bpf } for 
pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.277000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd8b2a3710 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd8b2a3740 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd8b2a3720 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd8b2a3740 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd8b2a3710 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit[4761]: AVC avc: 
denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd8b2a3780 a2=28 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit: BPF prog-id=21 op=LOAD Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd8b2a35f0 a2=40 a3=0 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit: BPF prog-id=21 op=UNLOAD Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd8b2a35e0 a2=50 a3=2800 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd8b2a35e0 a2=50 a3=2800 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit: BPF prog-id=22 op=LOAD Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd8b2a2e00 a2=94 a3=2 items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.281000 audit: BPF prog-id=22 op=UNLOAD Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { perfmon } for pid=4761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit[4761]: AVC avc: denied { bpf } for pid=4761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.281000 audit: BPF prog-id=23 op=LOAD Dec 13 01:55:26.281000 audit[4761]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd8b2a2f00 a2=94 a3=2d items=0 ppid=4620 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.281000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:55:26.285000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.285000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.285000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.285000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.285000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.285000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 01:55:26.285000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.285000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.285000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.285000 audit: BPF prog-id=24 op=LOAD Dec 13 01:55:26.285000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd78e9c3d0 a2=98 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.285000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.286000 audit: BPF prog-id=24 op=UNLOAD Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC 
avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit: BPF prog-id=25 op=LOAD Dec 13 01:55:26.286000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd78e9c1b0 a2=74 a3=540051 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.286000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.286000 audit: BPF prog-id=25 op=UNLOAD Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { bpf } for pid=4767 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.286000 audit: BPF prog-id=26 op=LOAD Dec 13 01:55:26.286000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 
a0=5 a1=7ffd78e9c1e0 a2=94 a3=2 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.286000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.286000 audit: BPF prog-id=26 op=UNLOAD Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit: BPF prog-id=27 op=LOAD Dec 13 01:55:26.390000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd78e9c0a0 a2=40 a3=1 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.390000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.390000 audit: BPF prog-id=27 op=UNLOAD Dec 13 01:55:26.390000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.390000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd78e9c170 a2=50 a3=7ffd78e9c250 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.390000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd78e9c0b0 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd78e9c0e0 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd78e9bff0 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd78e9c100 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd78e9c0e0 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 
13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd78e9c0d0 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd78e9c100 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd78e9c0e0 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd78e9c100 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd78e9c0d0 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd78e9c140 a2=28 a3=0 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd78e9bef0 a2=50 a3=1 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for 
pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit: BPF prog-id=28 op=LOAD Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd78e9bef0 a2=94 a3=5 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit: BPF prog-id=28 op=UNLOAD Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd78e9bfa0 a2=50 a3=1 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd78e9c0c0 a2=4 a3=38 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { confidentiality } for pid=4767 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd78e9c110 a2=94 a3=6 items=0 
ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for 
pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { confidentiality } for pid=4767 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd78e9b8c0 a2=94 a3=83 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { perfmon } for pid=4767 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.398000 audit[4767]: AVC avc: denied { confidentiality } for pid=4767 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:55:26.398000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd78e9b8c0 a2=94 a3=83 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.398000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.399000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.399000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd78e9d300 a2=10 a3=208 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.399000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.399000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd78e9d1a0 a2=10 a3=3 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.399000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.399000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd78e9d140 a2=10 a3=3 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.399000 audit[4767]: AVC avc: denied { bpf } for pid=4767 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:55:26.399000 audit[4767]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd78e9d140 a2=10 a3=7 items=0 ppid=4620 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:55:26.406000 audit: BPF prog-id=23 op=UNLOAD Dec 13 01:55:26.449000 audit[4795]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4795 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:55:26.449000 audit[4795]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcd03da6f0 a2=0 a3=7ffcd03da6dc items=0 ppid=4620 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.449000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:55:26.452000 audit[4794]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=4794 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:55:26.452000 audit[4794]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff693377d0 a2=0 a3=7fff693377bc items=0 ppid=4620 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.452000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:55:26.455000 audit[4798]: NETFILTER_CFG table=filter:99 family=2 entries=75 op=nft_register_chain pid=4798 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:55:26.455000 audit[4798]: SYSCALL arch=c000003e syscall=46 success=yes exit=40748 a0=3 a1=7ffe67fb0d30 a2=0 a3=7ffe67fb0d1c items=0 ppid=4620 pid=4798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.455000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:55:26.456000 audit[4793]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:55:26.456000 audit[4793]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffde63b5b30 a2=0 
a3=7ffde63b5b1c items=0 ppid=4620 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:26.456000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:55:26.586686 kubelet[2228]: E1213 01:55:26.586642 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:27.301417 systemd-networkd[1092]: calieedbd1b2a99: Gained IPv6LL Dec 13 01:55:27.399362 env[1318]: time="2024-12-13T01:55:27.399246900Z" level=info msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\"" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.442 [INFO][4841] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.443 [INFO][4841] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" iface="eth0" netns="/var/run/netns/cni-f4100e09-0385-d738-5404-56ad08ce2377" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.443 [INFO][4841] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" iface="eth0" netns="/var/run/netns/cni-f4100e09-0385-d738-5404-56ad08ce2377" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.443 [INFO][4841] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" iface="eth0" netns="/var/run/netns/cni-f4100e09-0385-d738-5404-56ad08ce2377" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.443 [INFO][4841] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.443 [INFO][4841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.460 [INFO][4848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" HandleID="k8s-pod-network.d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Workload="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.460 [INFO][4848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.460 [INFO][4848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.467 [WARNING][4848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" HandleID="k8s-pod-network.d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Workload="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.467 [INFO][4848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" HandleID="k8s-pod-network.d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Workload="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.469 [INFO][4848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:27.472130 env[1318]: 2024-12-13 01:55:27.470 [INFO][4841] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b" Dec 13 01:55:27.472659 env[1318]: time="2024-12-13T01:55:27.472315493Z" level=info msg="TearDown network for sandbox \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\" successfully" Dec 13 01:55:27.472659 env[1318]: time="2024-12-13T01:55:27.472351311Z" level=info msg="StopPodSandbox for \"d14a45e98a36bb41ee0a84b0d58f1e59bd30a64574164e329d010a8c81f84d4b\" returns successfully" Dec 13 01:55:27.475159 systemd[1]: run-netns-cni\x2df4100e09\x2d0385\x2dd738\x2d5404\x2d56ad08ce2377.mount: Deactivated successfully. 
Dec 13 01:55:27.476334 kubelet[2228]: E1213 01:55:27.476306 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:27.476719 env[1318]: time="2024-12-13T01:55:27.476677724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q9qs2,Uid:c24c6b5a-d5bf-438a-ad13-509ca76dd573,Namespace:kube-system,Attempt:1,}" Dec 13 01:55:27.660448 systemd-networkd[1092]: cali108e8a5bfde: Link UP Dec 13 01:55:27.662760 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:55:27.662859 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali108e8a5bfde: link becomes ready Dec 13 01:55:27.662992 systemd-networkd[1092]: cali108e8a5bfde: Gained carrier Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.593 [INFO][4856] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--q9qs2-eth0 coredns-76f75df574- kube-system c24c6b5a-d5bf-438a-ad13-509ca76dd573 1103 0 2024-12-13 01:54:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-q9qs2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali108e8a5bfde [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Namespace="kube-system" Pod="coredns-76f75df574-q9qs2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q9qs2-" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.593 [INFO][4856] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Namespace="kube-system" Pod="coredns-76f75df574-q9qs2" 
WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.620 [INFO][4870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" HandleID="k8s-pod-network.f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Workload="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.630 [INFO][4870] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" HandleID="k8s-pod-network.f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Workload="localhost-k8s-coredns--76f75df574--q9qs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019cdf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-q9qs2", "timestamp":"2024-12-13 01:55:27.620198419 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.630 [INFO][4870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.630 [INFO][4870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.630 [INFO][4870] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.632 [INFO][4870] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" host="localhost" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.636 [INFO][4870] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.639 [INFO][4870] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.641 [INFO][4870] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.643 [INFO][4870] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.643 [INFO][4870] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" host="localhost" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.644 [INFO][4870] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64 Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.648 [INFO][4870] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" host="localhost" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.655 [INFO][4870] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" host="localhost" Dec 13 
01:55:27.674394 env[1318]: 2024-12-13 01:55:27.655 [INFO][4870] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" host="localhost" Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.655 [INFO][4870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:27.674394 env[1318]: 2024-12-13 01:55:27.655 [INFO][4870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" HandleID="k8s-pod-network.f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Workload="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.675224 env[1318]: 2024-12-13 01:55:27.658 [INFO][4856] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Namespace="kube-system" Pod="coredns-76f75df574-q9qs2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q9qs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q9qs2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c24c6b5a-d5bf-438a-ad13-509ca76dd573", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-q9qs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali108e8a5bfde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:27.675224 env[1318]: 2024-12-13 01:55:27.658 [INFO][4856] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Namespace="kube-system" Pod="coredns-76f75df574-q9qs2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.675224 env[1318]: 2024-12-13 01:55:27.658 [INFO][4856] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali108e8a5bfde ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Namespace="kube-system" Pod="coredns-76f75df574-q9qs2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.675224 env[1318]: 2024-12-13 01:55:27.662 [INFO][4856] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Namespace="kube-system" Pod="coredns-76f75df574-q9qs2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.675224 env[1318]: 2024-12-13 01:55:27.663 [INFO][4856] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Namespace="kube-system" Pod="coredns-76f75df574-q9qs2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q9qs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--q9qs2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c24c6b5a-d5bf-438a-ad13-509ca76dd573", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64", Pod:"coredns-76f75df574-q9qs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali108e8a5bfde", MAC:"ae:55:6a:da:1f:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:27.675224 env[1318]: 2024-12-13 01:55:27.671 [INFO][4856] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64" Namespace="kube-system" Pod="coredns-76f75df574-q9qs2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--q9qs2-eth0" Dec 13 01:55:27.682000 audit[4891]: NETFILTER_CFG table=filter:101 family=2 entries=38 op=nft_register_chain pid=4891 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:55:27.682000 audit[4891]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffd9d4e0460 a2=0 a3=7ffd9d4e044c items=0 ppid=4620 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:27.682000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:55:27.721883 env[1318]: time="2024-12-13T01:55:27.721809685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:27.721883 env[1318]: time="2024-12-13T01:55:27.721856864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:27.721883 env[1318]: time="2024-12-13T01:55:27.721869990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:27.722101 env[1318]: time="2024-12-13T01:55:27.722057568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64 pid=4899 runtime=io.containerd.runc.v2 Dec 13 01:55:27.743488 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:55:27.764936 env[1318]: time="2024-12-13T01:55:27.764876661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q9qs2,Uid:c24c6b5a-d5bf-438a-ad13-509ca76dd573,Namespace:kube-system,Attempt:1,} returns sandbox id \"f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64\"" Dec 13 01:55:27.765706 kubelet[2228]: E1213 01:55:27.765686 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:27.768157 env[1318]: time="2024-12-13T01:55:27.768112273Z" level=info msg="CreateContainer within sandbox \"f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:55:27.790812 env[1318]: time="2024-12-13T01:55:27.790747578Z" level=info msg="CreateContainer within sandbox \"f0aa0b819df1a67badc4cfd351df7c7956d41690e7c0c265e627d6803bef8d64\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da4eeb518c5625d5213927b536e7782b7acb76c2667cfeea89172ffbac5f4daf\"" Dec 13 01:55:27.791254 env[1318]: time="2024-12-13T01:55:27.791228035Z" level=info msg="StartContainer for \"da4eeb518c5625d5213927b536e7782b7acb76c2667cfeea89172ffbac5f4daf\"" Dec 13 01:55:27.842615 env[1318]: time="2024-12-13T01:55:27.842548515Z" level=info msg="StartContainer for \"da4eeb518c5625d5213927b536e7782b7acb76c2667cfeea89172ffbac5f4daf\" returns successfully" 
Dec 13 01:55:28.133479 systemd-networkd[1092]: vxlan.calico: Gained IPv6LL Dec 13 01:55:28.591979 kubelet[2228]: E1213 01:55:28.591613 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:28.603369 kubelet[2228]: I1213 01:55:28.603316 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q9qs2" podStartSLOduration=73.603235986 podStartE2EDuration="1m13.603235986s" podCreationTimestamp="2024-12-13 01:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:28.603078525 +0000 UTC m=+87.342263207" watchObservedRunningTime="2024-12-13 01:55:28.603235986 +0000 UTC m=+87.342420667" Dec 13 01:55:28.618000 audit[4973]: NETFILTER_CFG table=filter:102 family=2 entries=16 op=nft_register_rule pid=4973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:28.618000 audit[4973]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff4d885fd0 a2=0 a3=7fff4d885fbc items=0 ppid=2407 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:28.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:28.628000 audit[4973]: NETFILTER_CFG table=nat:103 family=2 entries=14 op=nft_register_rule pid=4973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:28.628000 audit[4973]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff4d885fd0 a2=0 a3=0 items=0 ppid=2407 pid=4973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:28.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:28.638000 audit[4975]: NETFILTER_CFG table=filter:104 family=2 entries=13 op=nft_register_rule pid=4975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:28.638000 audit[4975]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffee4636170 a2=0 a3=7ffee463615c items=0 ppid=2407 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:28.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:28.644000 audit[4975]: NETFILTER_CFG table=nat:105 family=2 entries=35 op=nft_register_chain pid=4975 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:28.644000 audit[4975]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffee4636170 a2=0 a3=7ffee463615c items=0 ppid=2407 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:28.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:28.691149 env[1318]: time="2024-12-13T01:55:28.691087628Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:28.693130 env[1318]: time="2024-12-13T01:55:28.693093432Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:28.694888 env[1318]: time="2024-12-13T01:55:28.694866503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:28.696390 env[1318]: time="2024-12-13T01:55:28.696330375Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:28.696877 env[1318]: time="2024-12-13T01:55:28.696838243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:55:28.698775 env[1318]: time="2024-12-13T01:55:28.698734288Z" level=info msg="CreateContainer within sandbox \"501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:55:28.711394 env[1318]: time="2024-12-13T01:55:28.711314895Z" level=info msg="CreateContainer within sandbox \"501d8d8bf4d09adc271330781c814ea728964805e5521826cadb6e1122c16198\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"31f261861e690d4fcf19dfa8b4191d92c70029397e3bf76fca4305703877fd52\"" Dec 13 01:55:28.711902 env[1318]: time="2024-12-13T01:55:28.711874332Z" level=info msg="StartContainer for \"31f261861e690d4fcf19dfa8b4191d92c70029397e3bf76fca4305703877fd52\"" Dec 13 01:55:29.157478 systemd-networkd[1092]: cali108e8a5bfde: Gained IPv6LL Dec 13 01:55:29.213593 env[1318]: time="2024-12-13T01:55:29.213507545Z" level=info msg="StartContainer for \"31f261861e690d4fcf19dfa8b4191d92c70029397e3bf76fca4305703877fd52\" 
returns successfully" Dec 13 01:55:29.373896 systemd[1]: Started sshd@20-10.0.0.88:22-10.0.0.1:46030.service. Dec 13 01:55:29.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.88:22-10.0.0.1:46030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:29.375195 kernel: kauditd_printk_skb: 531 callbacks suppressed Dec 13 01:55:29.375289 kernel: audit: type=1130 audit(1734054929.372:498): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.88:22-10.0.0.1:46030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:29.397571 env[1318]: time="2024-12-13T01:55:29.397524704Z" level=info msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\"" Dec 13 01:55:29.421000 audit[5014]: USER_ACCT pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.423028 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 46030 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:29.427000 audit[5014]: CRED_ACQ pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.429242 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:29.434332 kernel: audit: type=1101 audit(1734054929.421:499): pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.434544 kernel: audit: type=1103 audit(1734054929.427:500): pid=5014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.443051 kernel: audit: type=1006 audit(1734054929.427:501): pid=5014 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 13 01:55:29.443138 kernel: audit: type=1300 audit(1734054929.427:501): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3d1f9120 a2=3 a3=0 items=0 ppid=1 pid=5014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:29.427000 audit[5014]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3d1f9120 a2=3 a3=0 items=0 ppid=1 pid=5014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:29.438546 systemd-logind[1304]: New session 21 of user core. Dec 13 01:55:29.439153 systemd[1]: Started session-21.scope. 
Dec 13 01:55:29.427000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:29.451173 kernel: audit: type=1327 audit(1734054929.427:501): proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:29.451225 kernel: audit: type=1105 audit(1734054929.443:502): pid=5014 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.443000 audit[5014]: USER_START pid=5014 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.445000 audit[5042]: CRED_ACQ pid=5042 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.458405 kernel: audit: type=1103 audit(1734054929.445:503): pid=5042 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.475552 systemd[1]: run-containerd-runc-k8s.io-31f261861e690d4fcf19dfa8b4191d92c70029397e3bf76fca4305703877fd52-runc.dhcZsY.mount: Deactivated successfully. Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.486 [INFO][5035] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.486 [INFO][5035] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" iface="eth0" netns="/var/run/netns/cni-9182b37e-033b-d8cc-3f9b-5a61f27f86ba" Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.487 [INFO][5035] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" iface="eth0" netns="/var/run/netns/cni-9182b37e-033b-d8cc-3f9b-5a61f27f86ba" Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.487 [INFO][5035] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" iface="eth0" netns="/var/run/netns/cni-9182b37e-033b-d8cc-3f9b-5a61f27f86ba" Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.487 [INFO][5035] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.487 [INFO][5035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.510 [INFO][5045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" HandleID="k8s-pod-network.d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Workload="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.510 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.510 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.515 [WARNING][5045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" HandleID="k8s-pod-network.d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Workload="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.516 [INFO][5045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" HandleID="k8s-pod-network.d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Workload="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.517 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.521065 env[1318]: 2024-12-13 01:55:29.518 [INFO][5035] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747" Dec 13 01:55:29.521539 env[1318]: time="2024-12-13T01:55:29.521197133Z" level=info msg="TearDown network for sandbox \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\" successfully" Dec 13 01:55:29.521539 env[1318]: time="2024-12-13T01:55:29.521226758Z" level=info msg="StopPodSandbox for \"d241956cf05abd984e806810ca093f07e60eb237497f4d5787dca8fac5352747\" returns successfully" Dec 13 01:55:29.521832 env[1318]: time="2024-12-13T01:55:29.521808097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5784655f99-jwwjd,Uid:210d3ffb-3280-4c87-8159-32ab42140bc1,Namespace:calico-system,Attempt:1,}" Dec 13 01:55:29.523746 systemd[1]: run-netns-cni\x2d9182b37e\x2d033b\x2dd8cc\x2d3f9b\x2d5a61f27f86ba.mount: Deactivated successfully. 
Dec 13 01:55:29.571584 sshd[5014]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:29.572000 audit[5014]: USER_END pid=5014 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.575421 systemd[1]: sshd@20-10.0.0.88:22-10.0.0.1:46030.service: Deactivated successfully. Dec 13 01:55:29.576413 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:55:29.576874 systemd-logind[1304]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:55:29.572000 audit[5014]: CRED_DISP pid=5014 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.577721 systemd-logind[1304]: Removed session 21. Dec 13 01:55:29.581515 kernel: audit: type=1106 audit(1734054929.572:504): pid=5014 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.581579 kernel: audit: type=1104 audit(1734054929.572:505): pid=5014 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:29.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.88:22-10.0.0.1:46030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:55:29.596165 kubelet[2228]: E1213 01:55:29.594929 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:29.606952 kubelet[2228]: I1213 01:55:29.606920 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f458bd975-jfg7r" podStartSLOduration=66.588482565 podStartE2EDuration="1m9.606877651s" podCreationTimestamp="2024-12-13 01:54:20 +0000 UTC" firstStartedPulling="2024-12-13 01:55:25.678716177 +0000 UTC m=+84.417900859" lastFinishedPulling="2024-12-13 01:55:28.697111264 +0000 UTC m=+87.436295945" observedRunningTime="2024-12-13 01:55:29.606562681 +0000 UTC m=+88.345747362" watchObservedRunningTime="2024-12-13 01:55:29.606877651 +0000 UTC m=+88.346062332" Dec 13 01:55:29.620000 audit[5085]: NETFILTER_CFG table=filter:106 family=2 entries=10 op=nft_register_rule pid=5085 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:29.620000 audit[5085]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd9a676e00 a2=0 a3=7ffd9a676dec items=0 ppid=2407 pid=5085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:29.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:29.626000 audit[5085]: NETFILTER_CFG table=nat:107 family=2 entries=20 op=nft_register_rule pid=5085 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:29.626000 audit[5085]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9a676e00 a2=0 a3=7ffd9a676dec items=0 ppid=2407 pid=5085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:29.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:29.668738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:55:29.668841 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0ff4023deab: link becomes ready Dec 13 01:55:29.668480 systemd-networkd[1092]: cali0ff4023deab: Link UP Dec 13 01:55:29.669138 systemd-networkd[1092]: cali0ff4023deab: Gained carrier Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.576 [INFO][5061] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0 calico-kube-controllers-5784655f99- calico-system 210d3ffb-3280-4c87-8159-32ab42140bc1 1128 0 2024-12-13 01:54:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5784655f99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5784655f99-jwwjd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0ff4023deab [] []}} ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Namespace="calico-system" Pod="calico-kube-controllers-5784655f99-jwwjd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.576 [INFO][5061] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Namespace="calico-system" Pod="calico-kube-controllers-5784655f99-jwwjd" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.611 [INFO][5076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" HandleID="k8s-pod-network.cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Workload="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.618 [INFO][5076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" HandleID="k8s-pod-network.cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Workload="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003acd80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5784655f99-jwwjd", "timestamp":"2024-12-13 01:55:29.611390092 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.618 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.619 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.619 [INFO][5076] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.621 [INFO][5076] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" host="localhost" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.625 [INFO][5076] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.630 [INFO][5076] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.631 [INFO][5076] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.633 [INFO][5076] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.633 [INFO][5076] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" host="localhost" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.634 [INFO][5076] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.642 [INFO][5076] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" host="localhost" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.660 [INFO][5076] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" host="localhost" Dec 13 
01:55:29.708503 env[1318]: 2024-12-13 01:55:29.660 [INFO][5076] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" host="localhost" Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.662 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.708503 env[1318]: 2024-12-13 01:55:29.662 [INFO][5076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" HandleID="k8s-pod-network.cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Workload="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.709520 env[1318]: 2024-12-13 01:55:29.664 [INFO][5061] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Namespace="calico-system" Pod="calico-kube-controllers-5784655f99-jwwjd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0", GenerateName:"calico-kube-controllers-5784655f99-", Namespace:"calico-system", SelfLink:"", UID:"210d3ffb-3280-4c87-8159-32ab42140bc1", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5784655f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5784655f99-jwwjd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ff4023deab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.709520 env[1318]: 2024-12-13 01:55:29.665 [INFO][5061] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Namespace="calico-system" Pod="calico-kube-controllers-5784655f99-jwwjd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.709520 env[1318]: 2024-12-13 01:55:29.665 [INFO][5061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ff4023deab ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Namespace="calico-system" Pod="calico-kube-controllers-5784655f99-jwwjd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.709520 env[1318]: 2024-12-13 01:55:29.669 [INFO][5061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Namespace="calico-system" Pod="calico-kube-controllers-5784655f99-jwwjd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.709520 env[1318]: 2024-12-13 01:55:29.669 [INFO][5061] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" 
Namespace="calico-system" Pod="calico-kube-controllers-5784655f99-jwwjd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0", GenerateName:"calico-kube-controllers-5784655f99-", Namespace:"calico-system", SelfLink:"", UID:"210d3ffb-3280-4c87-8159-32ab42140bc1", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5784655f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c", Pod:"calico-kube-controllers-5784655f99-jwwjd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ff4023deab", MAC:"7e:70:66:b9:a4:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.709520 env[1318]: 2024-12-13 01:55:29.706 [INFO][5061] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c" Namespace="calico-system" 
Pod="calico-kube-controllers-5784655f99-jwwjd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5784655f99--jwwjd-eth0" Dec 13 01:55:29.718000 audit[5100]: NETFILTER_CFG table=filter:108 family=2 entries=42 op=nft_register_chain pid=5100 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:55:29.718000 audit[5100]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7ffe09c53900 a2=0 a3=7ffe09c538ec items=0 ppid=4620 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:29.718000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:55:29.726427 env[1318]: time="2024-12-13T01:55:29.726150364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:29.726427 env[1318]: time="2024-12-13T01:55:29.726186975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:29.726427 env[1318]: time="2024-12-13T01:55:29.726200630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:29.726617 env[1318]: time="2024-12-13T01:55:29.726458602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c pid=5108 runtime=io.containerd.runc.v2 Dec 13 01:55:29.750400 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:55:29.776789 env[1318]: time="2024-12-13T01:55:29.776748820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5784655f99-jwwjd,Uid:210d3ffb-3280-4c87-8159-32ab42140bc1,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c\"" Dec 13 01:55:29.778046 env[1318]: time="2024-12-13T01:55:29.778025504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:55:30.397311 env[1318]: time="2024-12-13T01:55:30.397254690Z" level=info msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\"" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.437 [INFO][5160] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.437 [INFO][5160] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" iface="eth0" netns="/var/run/netns/cni-b3862cf3-0969-d3fb-8c15-c4c4a3622926" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.437 [INFO][5160] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" iface="eth0" netns="/var/run/netns/cni-b3862cf3-0969-d3fb-8c15-c4c4a3622926" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.437 [INFO][5160] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" iface="eth0" netns="/var/run/netns/cni-b3862cf3-0969-d3fb-8c15-c4c4a3622926" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.438 [INFO][5160] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.438 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.455 [INFO][5168] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" HandleID="k8s-pod-network.204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Workload="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.455 [INFO][5168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.455 [INFO][5168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.460 [WARNING][5168] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" HandleID="k8s-pod-network.204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Workload="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.460 [INFO][5168] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" HandleID="k8s-pod-network.204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Workload="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.461 [INFO][5168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:30.464852 env[1318]: 2024-12-13 01:55:30.463 [INFO][5160] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef" Dec 13 01:55:30.465323 env[1318]: time="2024-12-13T01:55:30.464968078Z" level=info msg="TearDown network for sandbox \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\" successfully" Dec 13 01:55:30.465323 env[1318]: time="2024-12-13T01:55:30.464997893Z" level=info msg="StopPodSandbox for \"204a9031243c0ce524fe0b504dff0e65d445e8bca41f687b69bba2a5953d2aef\" returns successfully" Dec 13 01:55:30.465658 env[1318]: time="2024-12-13T01:55:30.465638664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t2vq9,Uid:7369f4a7-4a25-4cba-bc4e-08b9ad330777,Namespace:calico-system,Attempt:1,}" Dec 13 01:55:30.474786 systemd[1]: run-netns-cni\x2db3862cf3\x2d0969\x2dd3fb\x2d8c15\x2dc4c4a3622926.mount: Deactivated successfully. 
Dec 13 01:55:30.564897 systemd-networkd[1092]: calif5d5664eac5: Link UP Dec 13 01:55:30.566646 systemd-networkd[1092]: calif5d5664eac5: Gained carrier Dec 13 01:55:30.567294 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif5d5664eac5: link becomes ready Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.506 [INFO][5177] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t2vq9-eth0 csi-node-driver- calico-system 7369f4a7-4a25-4cba-bc4e-08b9ad330777 1149 0 2024-12-13 01:54:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t2vq9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif5d5664eac5 [] []}} ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Namespace="calico-system" Pod="csi-node-driver-t2vq9" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2vq9-" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.507 [INFO][5177] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Namespace="calico-system" Pod="csi-node-driver-t2vq9" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.532 [INFO][5190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" HandleID="k8s-pod-network.3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Workload="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.539 [INFO][5190] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" HandleID="k8s-pod-network.3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Workload="localhost-k8s-csi--node--driver--t2vq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t2vq9", "timestamp":"2024-12-13 01:55:30.532038579 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.539 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.539 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.539 [INFO][5190] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.540 [INFO][5190] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" host="localhost" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.543 [INFO][5190] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.546 [INFO][5190] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.548 [INFO][5190] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.550 [INFO][5190] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:30.577857 env[1318]: 
2024-12-13 01:55:30.550 [INFO][5190] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" host="localhost" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.551 [INFO][5190] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8 Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.555 [INFO][5190] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" host="localhost" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.561 [INFO][5190] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" host="localhost" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.561 [INFO][5190] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" host="localhost" Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.561 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:30.577857 env[1318]: 2024-12-13 01:55:30.561 [INFO][5190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" HandleID="k8s-pod-network.3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Workload="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.578533 env[1318]: 2024-12-13 01:55:30.563 [INFO][5177] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Namespace="calico-system" Pod="csi-node-driver-t2vq9" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2vq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t2vq9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7369f4a7-4a25-4cba-bc4e-08b9ad330777", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t2vq9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calif5d5664eac5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:30.578533 env[1318]: 2024-12-13 01:55:30.563 [INFO][5177] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Namespace="calico-system" Pod="csi-node-driver-t2vq9" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.578533 env[1318]: 2024-12-13 01:55:30.563 [INFO][5177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5d5664eac5 ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Namespace="calico-system" Pod="csi-node-driver-t2vq9" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.578533 env[1318]: 2024-12-13 01:55:30.566 [INFO][5177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Namespace="calico-system" Pod="csi-node-driver-t2vq9" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.578533 env[1318]: 2024-12-13 01:55:30.567 [INFO][5177] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Namespace="calico-system" Pod="csi-node-driver-t2vq9" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2vq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t2vq9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7369f4a7-4a25-4cba-bc4e-08b9ad330777", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 21, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8", Pod:"csi-node-driver-t2vq9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif5d5664eac5", MAC:"16:15:fa:dd:57:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:30.578533 env[1318]: 2024-12-13 01:55:30.576 [INFO][5177] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8" Namespace="calico-system" Pod="csi-node-driver-t2vq9" WorkloadEndpoint="localhost-k8s-csi--node--driver--t2vq9-eth0" Dec 13 01:55:30.589092 env[1318]: time="2024-12-13T01:55:30.588999280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:30.589264 env[1318]: time="2024-12-13T01:55:30.589063492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:30.589264 env[1318]: time="2024-12-13T01:55:30.589104360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:30.589510 env[1318]: time="2024-12-13T01:55:30.589464686Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8 pid=5218 runtime=io.containerd.runc.v2 Dec 13 01:55:30.598074 kubelet[2228]: E1213 01:55:30.598051 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:55:30.588000 audit[5215]: NETFILTER_CFG table=filter:109 family=2 entries=42 op=nft_register_chain pid=5215 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:55:30.588000 audit[5215]: SYSCALL arch=c000003e syscall=46 success=yes exit=21016 a0=3 a1=7ffcf0c778e0 a2=0 a3=7ffcf0c778cc items=0 ppid=4620 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:30.588000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:55:30.615908 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:55:30.623517 env[1318]: time="2024-12-13T01:55:30.623469242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t2vq9,Uid:7369f4a7-4a25-4cba-bc4e-08b9ad330777,Namespace:calico-system,Attempt:1,} returns sandbox id \"3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8\"" Dec 13 01:55:30.652000 audit[5256]: NETFILTER_CFG table=filter:110 family=2 entries=9 op=nft_register_rule pid=5256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:30.652000 audit[5256]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc8562b6b0 a2=0 a3=7ffc8562b69c items=0 ppid=2407 pid=5256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:30.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:30.662000 audit[5256]: NETFILTER_CFG table=nat:111 family=2 entries=27 op=nft_register_chain pid=5256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:30.662000 audit[5256]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc8562b6b0 a2=0 a3=7ffc8562b69c items=0 ppid=2407 pid=5256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:30.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:31.141500 systemd-networkd[1092]: cali0ff4023deab: Gained IPv6LL Dec 13 01:55:32.549480 systemd-networkd[1092]: calif5d5664eac5: Gained IPv6LL Dec 13 01:55:32.802073 env[1318]: time="2024-12-13T01:55:32.801965402Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:32.804306 env[1318]: time="2024-12-13T01:55:32.804277695Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:32.805798 env[1318]: time="2024-12-13T01:55:32.805769645Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:32.808191 env[1318]: time="2024-12-13T01:55:32.807622643Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:32.808191 env[1318]: time="2024-12-13T01:55:32.807980093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:55:32.809054 env[1318]: time="2024-12-13T01:55:32.809018139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:55:32.820856 env[1318]: time="2024-12-13T01:55:32.820816671Z" level=info msg="CreateContainer within sandbox \"cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:55:32.834569 env[1318]: time="2024-12-13T01:55:32.834525400Z" level=info msg="CreateContainer within sandbox \"cf576388b3594ca5de3c65c85d827d3103ba6f23653fd612afdbebfd236a149c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"40a073ea4e1424cc2a43b9ffc4883cc24420b98717f5f3277474b27296b17913\"" Dec 13 01:55:32.835662 env[1318]: time="2024-12-13T01:55:32.835475259Z" level=info msg="StartContainer for \"40a073ea4e1424cc2a43b9ffc4883cc24420b98717f5f3277474b27296b17913\"" Dec 13 01:55:33.148643 env[1318]: time="2024-12-13T01:55:33.148545016Z" level=info msg="StartContainer for \"40a073ea4e1424cc2a43b9ffc4883cc24420b98717f5f3277474b27296b17913\" returns successfully" Dec 13 01:55:33.635019 kubelet[2228]: I1213 01:55:33.634975 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-5784655f99-jwwjd" podStartSLOduration=69.604435108 podStartE2EDuration="1m12.634935337s" podCreationTimestamp="2024-12-13 01:54:21 +0000 UTC" firstStartedPulling="2024-12-13 01:55:29.777761781 +0000 UTC m=+88.516946462" lastFinishedPulling="2024-12-13 01:55:32.80826199 +0000 UTC m=+91.547446691" observedRunningTime="2024-12-13 01:55:33.634725939 +0000 UTC m=+92.373910620" watchObservedRunningTime="2024-12-13 01:55:33.634935337 +0000 UTC m=+92.374120018" Dec 13 01:55:34.433467 env[1318]: time="2024-12-13T01:55:34.433411750Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:34.435460 env[1318]: time="2024-12-13T01:55:34.435414280Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:34.437023 env[1318]: time="2024-12-13T01:55:34.436970440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:34.438518 env[1318]: time="2024-12-13T01:55:34.438484090Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:34.438990 env[1318]: time="2024-12-13T01:55:34.438963512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:55:34.440604 env[1318]: time="2024-12-13T01:55:34.440577833Z" level=info msg="CreateContainer within sandbox \"3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8\" for 
container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:55:34.455492 env[1318]: time="2024-12-13T01:55:34.455440930Z" level=info msg="CreateContainer within sandbox \"3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e5fb01795b3a1b7673ef557ece83082a31185660198d75ddcfcfa58513690f45\"" Dec 13 01:55:34.456224 env[1318]: time="2024-12-13T01:55:34.456183543Z" level=info msg="StartContainer for \"e5fb01795b3a1b7673ef557ece83082a31185660198d75ddcfcfa58513690f45\"" Dec 13 01:55:34.505221 env[1318]: time="2024-12-13T01:55:34.505177886Z" level=info msg="StartContainer for \"e5fb01795b3a1b7673ef557ece83082a31185660198d75ddcfcfa58513690f45\" returns successfully" Dec 13 01:55:34.507279 env[1318]: time="2024-12-13T01:55:34.507230992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:55:34.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.88:22-10.0.0.1:46040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:34.573194 systemd[1]: Started sshd@21-10.0.0.88:22-10.0.0.1:46040.service. Dec 13 01:55:34.574744 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 13 01:55:34.574796 kernel: audit: type=1130 audit(1734054934.572:513): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.88:22-10.0.0.1:46040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:55:34.610000 audit[5355]: USER_ACCT pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.612023 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 46040 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:34.615982 sshd[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:34.614000 audit[5355]: CRED_ACQ pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.620590 kernel: audit: type=1101 audit(1734054934.610:514): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.620689 kernel: audit: type=1103 audit(1734054934.614:515): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.620716 kernel: audit: type=1006 audit(1734054934.614:516): pid=5355 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 01:55:34.619932 systemd-logind[1304]: New session 22 of user core. Dec 13 01:55:34.620633 systemd[1]: Started session-22.scope. 
Dec 13 01:55:34.614000 audit[5355]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec1900b40 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:34.627558 kernel: audit: type=1300 audit(1734054934.614:516): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec1900b40 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:34.627621 kernel: audit: type=1327 audit(1734054934.614:516): proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:34.614000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:34.629793 update_engine[1307]: I1213 01:55:34.629760 1307 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 01:55:34.629793 update_engine[1307]: I1213 01:55:34.629791 1307 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 01:55:34.630045 kernel: audit: type=1105 audit(1734054934.623:517): pid=5355 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.623000 audit[5355]: USER_START pid=5355 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.630602 update_engine[1307]: I1213 01:55:34.630581 1307 prefs.cc:52] aleph-version not present in 
/var/lib/update_engine/prefs Dec 13 01:55:34.631147 update_engine[1307]: I1213 01:55:34.631129 1307 omaha_request_params.cc:62] Current group set to lts Dec 13 01:55:34.632876 update_engine[1307]: I1213 01:55:34.632850 1307 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 01:55:34.632876 update_engine[1307]: I1213 01:55:34.632870 1307 update_attempter.cc:643] Scheduling an action processor start. Dec 13 01:55:34.633064 update_engine[1307]: I1213 01:55:34.633044 1307 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:55:34.633109 update_engine[1307]: I1213 01:55:34.633090 1307 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 01:55:34.633199 update_engine[1307]: I1213 01:55:34.633155 1307 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 01:55:34.633199 update_engine[1307]: I1213 01:55:34.633164 1307 omaha_request_action.cc:271] Request: Dec 13 01:55:34.633199 update_engine[1307]: Dec 13 01:55:34.633199 update_engine[1307]: Dec 13 01:55:34.633199 update_engine[1307]: Dec 13 01:55:34.633199 update_engine[1307]: Dec 13 01:55:34.633199 update_engine[1307]: Dec 13 01:55:34.633199 update_engine[1307]: Dec 13 01:55:34.633199 update_engine[1307]: Dec 13 01:55:34.633199 update_engine[1307]: Dec 13 01:55:34.633199 update_engine[1307]: I1213 01:55:34.633168 1307 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:55:34.635571 update_engine[1307]: I1213 01:55:34.635537 1307 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:55:34.635939 update_engine[1307]: I1213 01:55:34.635927 1307 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:55:34.636541 kernel: audit: type=1103 audit(1734054934.624:518): pid=5358 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.624000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.637334 locksmithd[1364]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 01:55:34.704551 update_engine[1307]: E1213 01:55:34.704427 1307 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:55:34.704551 update_engine[1307]: I1213 01:55:34.704535 1307 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 01:55:34.781378 sshd[5355]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:34.790524 kernel: audit: type=1106 audit(1734054934.781:519): pid=5355 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.790637 kernel: audit: type=1104 audit(1734054934.782:520): pid=5355 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.781000 audit[5355]: USER_END pid=5355 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.782000 audit[5355]: CRED_DISP pid=5355 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.88:22-10.0.0.1:46052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:34.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.88:22-10.0.0.1:46040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:34.783750 systemd[1]: Started sshd@22-10.0.0.88:22-10.0.0.1:46052.service. Dec 13 01:55:34.785109 systemd[1]: sshd@21-10.0.0.88:22-10.0.0.1:46040.service: Deactivated successfully. Dec 13 01:55:34.785956 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:55:34.786235 systemd-logind[1304]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:55:34.787157 systemd-logind[1304]: Removed session 22. 
Dec 13 01:55:34.816000 audit[5369]: USER_ACCT pid=5369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.818362 sshd[5369]: Accepted publickey for core from 10.0.0.1 port 46052 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:34.817000 audit[5369]: CRED_ACQ pid=5369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.817000 audit[5369]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf8ac3dd0 a2=3 a3=0 items=0 ppid=1 pid=5369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:34.817000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:34.819426 sshd[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:34.822628 systemd-logind[1304]: New session 23 of user core. Dec 13 01:55:34.823248 systemd[1]: Started session-23.scope. 
Dec 13 01:55:34.826000 audit[5369]: USER_START pid=5369 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:34.827000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:35.340682 sshd[5369]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:35.340000 audit[5369]: USER_END pid=5369 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:35.340000 audit[5369]: CRED_DISP pid=5369 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:35.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.88:22-10.0.0.1:46056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:35.343469 systemd[1]: Started sshd@23-10.0.0.88:22-10.0.0.1:46056.service. Dec 13 01:55:35.344279 systemd[1]: sshd@22-10.0.0.88:22-10.0.0.1:46052.service: Deactivated successfully. Dec 13 01:55:35.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.88:22-10.0.0.1:46052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:55:35.345304 systemd-logind[1304]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:55:35.345319 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:55:35.346287 systemd-logind[1304]: Removed session 23. Dec 13 01:55:35.377000 audit[5382]: USER_ACCT pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:35.379295 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 46056 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:35.378000 audit[5382]: CRED_ACQ pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:35.378000 audit[5382]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff77282ac0 a2=3 a3=0 items=0 ppid=1 pid=5382 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:35.378000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:35.380099 sshd[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:35.383148 systemd-logind[1304]: New session 24 of user core. Dec 13 01:55:35.384139 systemd[1]: Started session-24.scope. 
Dec 13 01:55:35.386000 audit[5382]: USER_START pid=5382 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:35.387000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:35.396953 env[1318]: time="2024-12-13T01:55:35.396305600Z" level=info msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\"" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.556 [INFO][5404] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.556 [INFO][5404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" iface="eth0" netns="/var/run/netns/cni-be103c5c-6d3c-5739-081b-bff5c17b8875" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.557 [INFO][5404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" iface="eth0" netns="/var/run/netns/cni-be103c5c-6d3c-5739-081b-bff5c17b8875" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.557 [INFO][5404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" iface="eth0" netns="/var/run/netns/cni-be103c5c-6d3c-5739-081b-bff5c17b8875" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.557 [INFO][5404] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.557 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.585 [INFO][5417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" HandleID="k8s-pod-network.222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Workload="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.586 [INFO][5417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.586 [INFO][5417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.593 [WARNING][5417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" HandleID="k8s-pod-network.222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Workload="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.593 [INFO][5417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" HandleID="k8s-pod-network.222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Workload="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.594 [INFO][5417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:35.597070 env[1318]: 2024-12-13 01:55:35.595 [INFO][5404] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11" Dec 13 01:55:35.598137 env[1318]: time="2024-12-13T01:55:35.598085925Z" level=info msg="TearDown network for sandbox \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\" successfully" Dec 13 01:55:35.598231 env[1318]: time="2024-12-13T01:55:35.598206554Z" level=info msg="StopPodSandbox for \"222f970dd493942c4aad75028bf557e5d85577a2cf0dac028311d256437efd11\" returns successfully" Dec 13 01:55:35.600643 systemd[1]: run-netns-cni\x2dbe103c5c\x2d6d3c\x2d5739\x2d081b\x2dbff5c17b8875.mount: Deactivated successfully. 
Dec 13 01:55:35.601605 env[1318]: time="2024-12-13T01:55:35.600789446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f458bd975-shd8j,Uid:57931067-2814-4ccb-9fc1-1f61db24c542,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:55:35.708011 systemd-networkd[1092]: cali1db13b8fe00: Link UP Dec 13 01:55:35.708531 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1db13b8fe00: link becomes ready Dec 13 01:55:35.708155 systemd-networkd[1092]: cali1db13b8fe00: Gained carrier Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.648 [INFO][5430] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0 calico-apiserver-7f458bd975- calico-apiserver 57931067-2814-4ccb-9fc1-1f61db24c542 1198 0 2024-12-13 01:54:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f458bd975 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f458bd975-shd8j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1db13b8fe00 [] []}} ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-shd8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--shd8j-" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.648 [INFO][5430] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-shd8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.676 [INFO][5439] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" HandleID="k8s-pod-network.3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Workload="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.682 [INFO][5439] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" HandleID="k8s-pod-network.3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Workload="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f458bd975-shd8j", "timestamp":"2024-12-13 01:55:35.676038863 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.682 [INFO][5439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.682 [INFO][5439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.682 [INFO][5439] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.684 [INFO][5439] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" host="localhost" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.686 [INFO][5439] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.690 [INFO][5439] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.691 [INFO][5439] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.693 [INFO][5439] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.693 [INFO][5439] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" host="localhost" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.694 [INFO][5439] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2 Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.697 [INFO][5439] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" host="localhost" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.703 [INFO][5439] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" host="localhost" Dec 13 
01:55:35.718985 env[1318]: 2024-12-13 01:55:35.703 [INFO][5439] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" host="localhost" Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.703 [INFO][5439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:35.718985 env[1318]: 2024-12-13 01:55:35.703 [INFO][5439] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" HandleID="k8s-pod-network.3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Workload="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.719869 env[1318]: 2024-12-13 01:55:35.705 [INFO][5430] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-shd8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0", GenerateName:"calico-apiserver-7f458bd975-", Namespace:"calico-apiserver", SelfLink:"", UID:"57931067-2814-4ccb-9fc1-1f61db24c542", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f458bd975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f458bd975-shd8j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1db13b8fe00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:35.719869 env[1318]: 2024-12-13 01:55:35.705 [INFO][5430] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-shd8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.719869 env[1318]: 2024-12-13 01:55:35.705 [INFO][5430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1db13b8fe00 ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-shd8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.719869 env[1318]: 2024-12-13 01:55:35.706 [INFO][5430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-shd8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.719869 env[1318]: 2024-12-13 01:55:35.708 [INFO][5430] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-shd8j" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0", GenerateName:"calico-apiserver-7f458bd975-", Namespace:"calico-apiserver", SelfLink:"", UID:"57931067-2814-4ccb-9fc1-1f61db24c542", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f458bd975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2", Pod:"calico-apiserver-7f458bd975-shd8j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1db13b8fe00", MAC:"2a:d2:94:ba:1b:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:35.719869 env[1318]: 2024-12-13 01:55:35.717 [INFO][5430] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2" Namespace="calico-apiserver" Pod="calico-apiserver-7f458bd975-shd8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f458bd975--shd8j-eth0" Dec 13 01:55:35.726000 
audit[5464]: NETFILTER_CFG table=filter:112 family=2 entries=52 op=nft_register_chain pid=5464 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:55:35.726000 audit[5464]: SYSCALL arch=c000003e syscall=46 success=yes exit=26744 a0=3 a1=7ffc7328b0e0 a2=0 a3=7ffc7328b0cc items=0 ppid=4620 pid=5464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:35.726000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:55:35.730323 env[1318]: time="2024-12-13T01:55:35.730195783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:35.730323 env[1318]: time="2024-12-13T01:55:35.730291536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:35.730461 env[1318]: time="2024-12-13T01:55:35.730330650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:35.730536 env[1318]: time="2024-12-13T01:55:35.730495213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2 pid=5471 runtime=io.containerd.runc.v2 Dec 13 01:55:35.749497 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:55:35.775320 env[1318]: time="2024-12-13T01:55:35.775283245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f458bd975-shd8j,Uid:57931067-2814-4ccb-9fc1-1f61db24c542,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2\"" Dec 13 01:55:35.777742 env[1318]: time="2024-12-13T01:55:35.777557690Z" level=info msg="CreateContainer within sandbox \"3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:55:35.792251 env[1318]: time="2024-12-13T01:55:35.792204367Z" level=info msg="CreateContainer within sandbox \"3f2e6b8b57448c4e2e63f300db29ffcc71af5e5c9d35c3eee536d5f57a4a96c2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c28d25b44c16f995a6a28ea219af5a3bebe85ea4b4a5f58d770cfb2419db0fcd\"" Dec 13 01:55:35.792765 env[1318]: time="2024-12-13T01:55:35.792618294Z" level=info msg="StartContainer for \"c28d25b44c16f995a6a28ea219af5a3bebe85ea4b4a5f58d770cfb2419db0fcd\"" Dec 13 01:55:35.841707 env[1318]: time="2024-12-13T01:55:35.841667581Z" level=info msg="StartContainer for \"c28d25b44c16f995a6a28ea219af5a3bebe85ea4b4a5f58d770cfb2419db0fcd\" returns successfully" Dec 13 01:55:36.640000 audit[5560]: NETFILTER_CFG table=filter:113 family=2 entries=8 op=nft_register_rule pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:36.640000 
audit[5560]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcf5d5eb60 a2=0 a3=7ffcf5d5eb4c items=0 ppid=2407 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:36.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:36.644000 audit[5560]: NETFILTER_CFG table=nat:114 family=2 entries=30 op=nft_register_rule pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:36.644000 audit[5560]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffcf5d5eb60 a2=0 a3=7ffcf5d5eb4c items=0 ppid=2407 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:36.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:36.649988 env[1318]: time="2024-12-13T01:55:36.649948606Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:36.653613 env[1318]: time="2024-12-13T01:55:36.653580612Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:36.656104 env[1318]: time="2024-12-13T01:55:36.656074323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
01:55:36.658020 env[1318]: time="2024-12-13T01:55:36.657989264Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:55:36.658617 env[1318]: time="2024-12-13T01:55:36.658594093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:55:36.660260 env[1318]: time="2024-12-13T01:55:36.660224614Z" level=info msg="CreateContainer within sandbox \"3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:55:36.678626 env[1318]: time="2024-12-13T01:55:36.678574914Z" level=info msg="CreateContainer within sandbox \"3146f3dbb41c1d1132681ba3136ba596ba6b03fde1e40bc7f33036f05c91cec8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3915c949938150412fe615492b5fe5fb46256f4aae0d7630fc9595044b43d276\"" Dec 13 01:55:36.679295 env[1318]: time="2024-12-13T01:55:36.679223837Z" level=info msg="StartContainer for \"3915c949938150412fe615492b5fe5fb46256f4aae0d7630fc9595044b43d276\"" Dec 13 01:55:36.738547 env[1318]: time="2024-12-13T01:55:36.738498365Z" level=info msg="StartContainer for \"3915c949938150412fe615492b5fe5fb46256f4aae0d7630fc9595044b43d276\" returns successfully" Dec 13 01:55:36.813948 systemd[1]: run-containerd-runc-k8s.io-3915c949938150412fe615492b5fe5fb46256f4aae0d7630fc9595044b43d276-runc.6PcYrp.mount: Deactivated successfully. 
Dec 13 01:55:36.966404 kubelet[2228]: I1213 01:55:36.966189 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f458bd975-shd8j" podStartSLOduration=76.96614092 podStartE2EDuration="1m16.96614092s" podCreationTimestamp="2024-12-13 01:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:36.627241964 +0000 UTC m=+95.366426645" watchObservedRunningTime="2024-12-13 01:55:36.96614092 +0000 UTC m=+95.705325602" Dec 13 01:55:36.984000 audit[5598]: NETFILTER_CFG table=filter:115 family=2 entries=8 op=nft_register_rule pid=5598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:36.984000 audit[5598]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe7eb24ad0 a2=0 a3=7ffe7eb24abc items=0 ppid=2407 pid=5598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:36.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:36.995000 audit[5598]: NETFILTER_CFG table=nat:116 family=2 entries=34 op=nft_register_chain pid=5598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:55:36.995000 audit[5598]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffe7eb24ad0 a2=0 a3=7ffe7eb24abc items=0 ppid=2407 pid=5598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:36.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:55:37.151911 sshd[5382]: 
pam_unix(sshd:session): session closed for user core Dec 13 01:55:37.152000 audit[5382]: USER_END pid=5382 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:37.152000 audit[5382]: CRED_DISP pid=5382 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:37.154358 systemd[1]: Started sshd@24-10.0.0.88:22-10.0.0.1:55848.service. Dec 13 01:55:37.156983 systemd[1]: sshd@23-10.0.0.88:22-10.0.0.1:46056.service: Deactivated successfully. Dec 13 01:55:37.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.88:22-10.0.0.1:55848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:37.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.88:22-10.0.0.1:46056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:55:37.158011 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:55:37.158560 systemd-networkd[1092]: cali1db13b8fe00: Gained IPv6LL Dec 13 01:55:37.158874 systemd-logind[1304]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:55:37.160066 systemd-logind[1304]: Removed session 24. 
Dec 13 01:55:37.188000 audit[5600]: USER_ACCT pid=5600 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:37.190182 sshd[5600]: Accepted publickey for core from 10.0.0.1 port 55848 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw Dec 13 01:55:37.189000 audit[5600]: CRED_ACQ pid=5600 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:55:37.189000 audit[5600]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd6839db0 a2=3 a3=0 items=0 ppid=1 pid=5600 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:55:37.189000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:55:37.191466 sshd[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:37.196330 systemd-logind[1304]: New session 25 of user core. Dec 13 01:55:37.197036 systemd[1]: Started session-25.scope. 
Dec 13 01:55:37.200000 audit[5600]: USER_START pid=5600 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.201000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.500783 sshd[5600]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:37.502941 systemd[1]: Started sshd@25-10.0.0.88:22-10.0.0.1:55862.service.
Dec 13 01:55:37.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.88:22-10.0.0.1:55862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:37.505000 audit[5600]: USER_END pid=5600 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.505000 audit[5600]: CRED_DISP pid=5600 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.507624 kubelet[2228]: I1213 01:55:37.507595 2228 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 01:55:37.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.88:22-10.0.0.1:55848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:37.508263 systemd[1]: sshd@24-10.0.0.88:22-10.0.0.1:55848.service: Deactivated successfully.
Dec 13 01:55:37.510033 kubelet[2228]: I1213 01:55:37.510015 2228 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 01:55:37.511827 systemd-logind[1304]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:55:37.511936 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:55:37.512990 systemd-logind[1304]: Removed session 25.
Dec 13 01:55:37.538000 audit[5613]: USER_ACCT pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.539614 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 55862 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:55:37.539000 audit[5613]: CRED_ACQ pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.539000 audit[5613]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6b0cfc30 a2=3 a3=0 items=0 ppid=1 pid=5613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:37.539000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 01:55:37.540803 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:55:37.544454 systemd-logind[1304]: New session 26 of user core.
Dec 13 01:55:37.545185 systemd[1]: Started session-26.scope.
Dec 13 01:55:37.548000 audit[5613]: USER_START pid=5613 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.549000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.798487 kubelet[2228]: I1213 01:55:37.798350 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-t2vq9" podStartSLOduration=70.763853549 podStartE2EDuration="1m16.798308365s" podCreationTimestamp="2024-12-13 01:54:21 +0000 UTC" firstStartedPulling="2024-12-13 01:55:30.624377 +0000 UTC m=+89.363561681" lastFinishedPulling="2024-12-13 01:55:36.658831816 +0000 UTC m=+95.398016497" observedRunningTime="2024-12-13 01:55:37.798297964 +0000 UTC m=+96.537482655" watchObservedRunningTime="2024-12-13 01:55:37.798308365 +0000 UTC m=+96.537493046"
Dec 13 01:55:37.801781 sshd[5613]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:37.801000 audit[5613]: USER_END pid=5613 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.801000 audit[5613]: CRED_DISP pid=5613 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:37.804344 systemd[1]: sshd@25-10.0.0.88:22-10.0.0.1:55862.service: Deactivated successfully.
Dec 13 01:55:37.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.88:22-10.0.0.1:55862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:37.805392 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:55:37.805396 systemd-logind[1304]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:55:37.806091 systemd-logind[1304]: Removed session 26.
Dec 13 01:55:38.013000 audit[5631]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=5631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:38.013000 audit[5631]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc60bbc360 a2=0 a3=7ffc60bbc34c items=0 ppid=2407 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:38.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:38.018000 audit[5631]: NETFILTER_CFG table=nat:118 family=2 entries=22 op=nft_register_rule pid=5631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:38.018000 audit[5631]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc60bbc360 a2=0 a3=0 items=0 ppid=2407 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:38.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:38.396618 env[1318]: time="2024-12-13T01:55:38.396518810Z" level=info msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\""
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.553 [INFO][5648] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5"
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.553 [INFO][5648] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" iface="eth0" netns="/var/run/netns/cni-6f136d83-1c97-5751-47a3-4e85d6569ec9"
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.553 [INFO][5648] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" iface="eth0" netns="/var/run/netns/cni-6f136d83-1c97-5751-47a3-4e85d6569ec9"
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.554 [INFO][5648] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" iface="eth0" netns="/var/run/netns/cni-6f136d83-1c97-5751-47a3-4e85d6569ec9"
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.554 [INFO][5648] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5"
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.554 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5"
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.573 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" HandleID="k8s-pod-network.90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" Workload="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.574 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.574 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.579 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" HandleID="k8s-pod-network.90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" Workload="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.579 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" HandleID="k8s-pod-network.90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5" Workload="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.580 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:55:38.583663 env[1318]: 2024-12-13 01:55:38.581 [INFO][5648] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5"
Dec 13 01:55:38.584170 env[1318]: time="2024-12-13T01:55:38.583960428Z" level=info msg="TearDown network for sandbox \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\" successfully"
Dec 13 01:55:38.584170 env[1318]: time="2024-12-13T01:55:38.584001536Z" level=info msg="StopPodSandbox for \"90832b60b7b89170e94fa7d92e266efa9b6ebc880342b8f2a7474149c3d7d3b5\" returns successfully"
Dec 13 01:55:38.585591 kubelet[2228]: E1213 01:55:38.585549 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:55:38.586576 systemd[1]: run-netns-cni\x2d6f136d83\x2d1c97\x2d5751\x2d47a3\x2d4e85d6569ec9.mount: Deactivated successfully.
Dec 13 01:55:38.587587 env[1318]: time="2024-12-13T01:55:38.586994283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qc46k,Uid:03656298-6b0b-422b-a3a9-1c9ae4e861d5,Namespace:kube-system,Attempt:1,}"
Dec 13 01:55:39.092876 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 01:55:39.093015 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliec2fee95c7e: link becomes ready
Dec 13 01:55:39.093393 systemd-networkd[1092]: caliec2fee95c7e: Link UP
Dec 13 01:55:39.093588 systemd-networkd[1092]: caliec2fee95c7e: Gained carrier
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.030 [INFO][5663] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--qc46k-eth0 coredns-76f75df574- kube-system 03656298-6b0b-422b-a3a9-1c9ae4e861d5 1262 0 2024-12-13 01:54:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-qc46k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliec2fee95c7e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Namespace="kube-system" Pod="coredns-76f75df574-qc46k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qc46k-"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.030 [INFO][5663] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Namespace="kube-system" Pod="coredns-76f75df574-qc46k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.055 [INFO][5677] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" HandleID="k8s-pod-network.6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Workload="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.062 [INFO][5677] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" HandleID="k8s-pod-network.6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Workload="localhost-k8s-coredns--76f75df574--qc46k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dccc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-qc46k", "timestamp":"2024-12-13 01:55:39.055783545 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.062 [INFO][5677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.062 [INFO][5677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.062 [INFO][5677] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.063 [INFO][5677] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" host="localhost"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.066 [INFO][5677] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.070 [INFO][5677] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.071 [INFO][5677] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.073 [INFO][5677] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.073 [INFO][5677] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" host="localhost"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.074 [INFO][5677] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.078 [INFO][5677] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" host="localhost"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.087 [INFO][5677] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" host="localhost"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.087 [INFO][5677] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" host="localhost"
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.087 [INFO][5677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:55:39.103872 env[1318]: 2024-12-13 01:55:39.087 [INFO][5677] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" HandleID="k8s-pod-network.6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Workload="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:39.104822 env[1318]: 2024-12-13 01:55:39.089 [INFO][5663] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Namespace="kube-system" Pod="coredns-76f75df574-qc46k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qc46k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qc46k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"03656298-6b0b-422b-a3a9-1c9ae4e861d5", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-qc46k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec2fee95c7e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:55:39.104822 env[1318]: 2024-12-13 01:55:39.089 [INFO][5663] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Namespace="kube-system" Pod="coredns-76f75df574-qc46k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:39.104822 env[1318]: 2024-12-13 01:55:39.089 [INFO][5663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec2fee95c7e ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Namespace="kube-system" Pod="coredns-76f75df574-qc46k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:39.104822 env[1318]: 2024-12-13 01:55:39.092 [INFO][5663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Namespace="kube-system" Pod="coredns-76f75df574-qc46k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:39.104822 env[1318]: 2024-12-13 01:55:39.093 [INFO][5663] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Namespace="kube-system" Pod="coredns-76f75df574-qc46k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qc46k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qc46k-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"03656298-6b0b-422b-a3a9-1c9ae4e861d5", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61", Pod:"coredns-76f75df574-qc46k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec2fee95c7e", MAC:"1e:b6:bd:7f:22:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:55:39.104822 env[1318]: 2024-12-13 01:55:39.102 [INFO][5663] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61" Namespace="kube-system" Pod="coredns-76f75df574-qc46k" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qc46k-eth0"
Dec 13 01:55:39.111000 audit[5697]: NETFILTER_CFG table=filter:119 family=2 entries=48 op=nft_register_chain pid=5697 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 01:55:39.111000 audit[5697]: SYSCALL arch=c000003e syscall=46 success=yes exit=23432 a0=3 a1=7fffefefeb70 a2=0 a3=7fffefefeb5c items=0 ppid=4620 pid=5697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:39.111000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 01:55:39.117328 env[1318]: time="2024-12-13T01:55:39.117251479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:39.117328 env[1318]: time="2024-12-13T01:55:39.117313498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:39.117480 env[1318]: time="2024-12-13T01:55:39.117326482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:39.117687 env[1318]: time="2024-12-13T01:55:39.117620571Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61 pid=5705 runtime=io.containerd.runc.v2
Dec 13 01:55:39.138353 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:55:39.162441 env[1318]: time="2024-12-13T01:55:39.162388001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qc46k,Uid:03656298-6b0b-422b-a3a9-1c9ae4e861d5,Namespace:kube-system,Attempt:1,} returns sandbox id \"6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61\""
Dec 13 01:55:39.163244 kubelet[2228]: E1213 01:55:39.163213 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:55:39.166139 env[1318]: time="2024-12-13T01:55:39.165566549Z" level=info msg="CreateContainer within sandbox \"6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:55:39.178903 env[1318]: time="2024-12-13T01:55:39.178845544Z" level=info msg="CreateContainer within sandbox \"6f99fc5adfcf17b056b8f24827badb82bf3c5b3970555ca1a170bb0f2f5d4f61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21b3dccc7410bad3d3bf1abf0ddb8a3c5d9844e1f751492814d57ddf838b0bae\""
Dec 13 01:55:39.179481 env[1318]: time="2024-12-13T01:55:39.179454912Z" level=info msg="StartContainer for \"21b3dccc7410bad3d3bf1abf0ddb8a3c5d9844e1f751492814d57ddf838b0bae\""
Dec 13 01:55:39.216672 env[1318]: time="2024-12-13T01:55:39.216604361Z" level=info msg="StartContainer for \"21b3dccc7410bad3d3bf1abf0ddb8a3c5d9844e1f751492814d57ddf838b0bae\" returns successfully"
Dec 13 01:55:39.622295 kubelet[2228]: E1213 01:55:39.622169 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:55:39.693499 kubelet[2228]: I1213 01:55:39.693462 2228 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qc46k" podStartSLOduration=84.693420456 podStartE2EDuration="1m24.693420456s" podCreationTimestamp="2024-12-13 01:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:39.684544275 +0000 UTC m=+98.423728946" watchObservedRunningTime="2024-12-13 01:55:39.693420456 +0000 UTC m=+98.432605127"
Dec 13 01:55:39.697000 audit[5780]: NETFILTER_CFG table=filter:120 family=2 entries=32 op=nft_register_rule pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:39.699659 kernel: kauditd_printk_skb: 69 callbacks suppressed
Dec 13 01:55:39.699707 kernel: audit: type=1325 audit(1734054939.697:566): table=filter:120 family=2 entries=32 op=nft_register_rule pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:39.697000 audit[5780]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe37302350 a2=0 a3=7ffe3730233c items=0 ppid=2407 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:39.706739 kernel: audit: type=1300 audit(1734054939.697:566): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe37302350 a2=0 a3=7ffe3730233c items=0 ppid=2407 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:39.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:39.709177 kernel: audit: type=1327 audit(1734054939.697:566): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:39.709000 audit[5780]: NETFILTER_CFG table=nat:121 family=2 entries=46 op=nft_register_rule pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:39.709000 audit[5780]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffe37302350 a2=0 a3=7ffe3730233c items=0 ppid=2407 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:39.760523 kernel: audit: type=1325 audit(1734054939.709:567): table=nat:121 family=2 entries=46 op=nft_register_rule pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:39.760700 kernel: audit: type=1300 audit(1734054939.709:567): arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffe37302350 a2=0 a3=7ffe3730233c items=0 ppid=2407 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:39.760722 kernel: audit: type=1327 audit(1734054939.709:567): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:39.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:39.772000 audit[5782]: NETFILTER_CFG table=filter:122 family=2 entries=32 op=nft_register_rule pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:39.772000 audit[5782]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffd50520c20 a2=0 a3=7ffd50520c0c items=0 ppid=2407 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:39.781797 kernel: audit: type=1325 audit(1734054939.772:568): table=filter:122 family=2 entries=32 op=nft_register_rule pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:39.781846 kernel: audit: type=1300 audit(1734054939.772:568): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffd50520c20 a2=0 a3=7ffd50520c0c items=0 ppid=2407 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:39.781865 kernel: audit: type=1327 audit(1734054939.772:568): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:39.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:39.781000 audit[5782]: NETFILTER_CFG table=nat:123 family=2 entries=58 op=nft_register_chain pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:39.786489 kernel: audit: type=1325 audit(1734054939.781:569): table=nat:123 family=2 entries=58 op=nft_register_chain pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:39.781000 audit[5782]: SYSCALL arch=c000003e syscall=46 success=yes exit=20628 a0=3 a1=7ffd50520c20 a2=0 a3=7ffd50520c0c items=0 ppid=2407 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:39.781000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:40.623969 kubelet[2228]: E1213 01:55:40.623935 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:55:40.805462 systemd-networkd[1092]: caliec2fee95c7e: Gained IPv6LL
Dec 13 01:55:41.625784 kubelet[2228]: E1213 01:55:41.625750 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:55:42.804713 systemd[1]: Started sshd@26-10.0.0.88:22-10.0.0.1:55878.service.
Dec 13 01:55:42.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.88:22-10.0.0.1:55878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:42.839000 audit[5783]: USER_ACCT pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:42.840647 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 55878 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:55:42.840000 audit[5783]: CRED_ACQ pid=5783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:42.840000 audit[5783]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc29795080 a2=3 a3=0 items=0 ppid=1 pid=5783 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:42.840000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 01:55:42.841903 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:55:42.845872 systemd-logind[1304]: New session 27 of user core.
Dec 13 01:55:42.846608 systemd[1]: Started session-27.scope.
Dec 13 01:55:42.849000 audit[5783]: USER_START pid=5783 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:42.851000 audit[5786]: CRED_ACQ pid=5786 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:42.968575 sshd[5783]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:42.968000 audit[5783]: USER_END pid=5783 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:42.968000 audit[5783]: CRED_DISP pid=5783 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:42.971121 systemd[1]: sshd@26-10.0.0.88:22-10.0.0.1:55878.service: Deactivated successfully.
Dec 13 01:55:42.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.88:22-10.0.0.1:55878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:42.972178 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:55:42.972224 systemd-logind[1304]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:55:42.972872 systemd-logind[1304]: Removed session 27.
Dec 13 01:55:44.143000 audit[5800]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:44.143000 audit[5800]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe2b60bf80 a2=0 a3=7ffe2b60bf6c items=0 ppid=2407 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:44.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:44.149000 audit[5800]: NETFILTER_CFG table=nat:125 family=2 entries=106 op=nft_register_chain pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 01:55:44.149000 audit[5800]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffe2b60bf80 a2=0 a3=7ffe2b60bf6c items=0 ppid=2407 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:44.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 01:55:44.628199 update_engine[1307]: I1213 01:55:44.628139 1307 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:55:44.628642 update_engine[1307]: I1213 01:55:44.628376 1307 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:55:44.628642 update_engine[1307]: I1213 01:55:44.628522 1307 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:55:44.639025 update_engine[1307]: E1213 01:55:44.638975 1307 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:55:44.639174 update_engine[1307]: I1213 01:55:44.639074 1307 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 01:55:47.971719 systemd[1]: Started sshd@27-10.0.0.88:22-10.0.0.1:59602.service.
Dec 13 01:55:47.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.88:22-10.0.0.1:59602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:47.976287 kernel: kauditd_printk_skb: 19 callbacks suppressed
Dec 13 01:55:47.976355 kernel: audit: type=1130 audit(1734054947.970:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.88:22-10.0.0.1:59602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:48.006000 audit[5828]: USER_ACCT pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.008069 sshd[5828]: Accepted publickey for core from 10.0.0.1 port 59602 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:55:48.008000 audit[5828]: CRED_ACQ pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.009686 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:55:48.013077 systemd-logind[1304]: New session 28 of user core.
Dec 13 01:55:48.014028 systemd[1]: Started session-28.scope.
Dec 13 01:55:48.016662 kernel: audit: type=1101 audit(1734054948.006:582): pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.016716 kernel: audit: type=1103 audit(1734054948.008:583): pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.019223 kernel: audit: type=1006 audit(1734054948.008:584): pid=5828 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Dec 13 01:55:48.008000 audit[5828]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe016cb800 a2=3 a3=0 items=0 ppid=1 pid=5828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:48.030410 kernel: audit: type=1300 audit(1734054948.008:584): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe016cb800 a2=3 a3=0 items=0 ppid=1 pid=5828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:48.030447 kernel: audit: type=1327 audit(1734054948.008:584): proctitle=737368643A20636F7265205B707269765D
Dec 13 01:55:48.030466 kernel: audit: type=1105 audit(1734054948.017:585): pid=5828 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.030486 kernel: audit: type=1103 audit(1734054948.018:586): pid=5831 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.008000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 01:55:48.017000 audit[5828]: USER_START pid=5828 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.018000 audit[5831]: CRED_ACQ pid=5831 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.139490 sshd[5828]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:48.139000 audit[5828]: USER_END pid=5828 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.141513 systemd[1]: sshd@27-10.0.0.88:22-10.0.0.1:59602.service: Deactivated successfully.
Dec 13 01:55:48.142432 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:55:48.143045 systemd-logind[1304]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:55:48.143902 systemd-logind[1304]: Removed session 28.
Dec 13 01:55:48.139000 audit[5828]: CRED_DISP pid=5828 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.149770 kernel: audit: type=1106 audit(1734054948.139:587): pid=5828 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.149837 kernel: audit: type=1104 audit(1734054948.139:588): pid=5828 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:48.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.88:22-10.0.0.1:59602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:52.168578 kubelet[2228]: E1213 01:55:52.168554 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:55:53.142195 systemd[1]: Started sshd@28-10.0.0.88:22-10.0.0.1:59616.service.
Dec 13 01:55:53.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.88:22-10.0.0.1:59616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:53.149670 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 01:55:53.149721 kernel: audit: type=1130 audit(1734054953.141:590): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.88:22-10.0.0.1:59616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:53.176000 audit[5866]: USER_ACCT pid=5866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.182735 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 59616 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:55:53.183317 kernel: audit: type=1101 audit(1734054953.176:591): pid=5866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.182000 audit[5866]: CRED_ACQ pid=5866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.184138 sshd[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:55:53.189734 systemd-logind[1304]: New session 29 of user core.
Dec 13 01:55:53.190039 systemd[1]: Started session-29.scope.
Dec 13 01:55:53.190514 kernel: audit: type=1103 audit(1734054953.182:592): pid=5866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.190595 kernel: audit: type=1006 audit(1734054953.182:593): pid=5866 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Dec 13 01:55:53.190619 kernel: audit: type=1300 audit(1734054953.182:593): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc408102c0 a2=3 a3=0 items=0 ppid=1 pid=5866 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:53.208441 kernel: audit: type=1327 audit(1734054953.182:593): proctitle=737368643A20636F7265205B707269765D
Dec 13 01:55:53.208506 kernel: audit: type=1105 audit(1734054953.197:594): pid=5866 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.208525 kernel: audit: type=1103 audit(1734054953.198:595): pid=5869 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.182000 audit[5866]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc408102c0 a2=3 a3=0 items=0 ppid=1 pid=5866 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:53.182000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 01:55:53.197000 audit[5866]: USER_START pid=5866 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.198000 audit[5869]: CRED_ACQ pid=5869 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.319932 sshd[5866]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:53.319000 audit[5866]: USER_END pid=5866 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.322570 systemd[1]: sshd@28-10.0.0.88:22-10.0.0.1:59616.service: Deactivated successfully.
Dec 13 01:55:53.323334 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:55:53.325300 kernel: audit: type=1106 audit(1734054953.319:596): pid=5866 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.319000 audit[5866]: CRED_DISP pid=5866 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:53.325767 systemd-logind[1304]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:55:53.326490 systemd-logind[1304]: Removed session 29.
Dec 13 01:55:53.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.88:22-10.0.0.1:59616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:53.329307 kernel: audit: type=1104 audit(1734054953.319:597): pid=5866 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:54.628383 update_engine[1307]: I1213 01:55:54.628309 1307 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:55:54.628865 update_engine[1307]: I1213 01:55:54.628592 1307 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:55:54.628865 update_engine[1307]: I1213 01:55:54.628818 1307 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:55:54.646364 update_engine[1307]: E1213 01:55:54.646315 1307 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:55:54.646535 update_engine[1307]: I1213 01:55:54.646396 1307 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 01:55:58.323380 systemd[1]: Started sshd@29-10.0.0.88:22-10.0.0.1:44650.service.
Dec 13 01:55:58.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.88:22-10.0.0.1:44650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:58.324476 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 01:55:58.332841 kernel: audit: type=1130 audit(1734054958.323:599): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.88:22-10.0.0.1:44650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:58.358000 audit[5882]: USER_ACCT pid=5882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.358922 sshd[5882]: Accepted publickey for core from 10.0.0.1 port 44650 ssh2: RSA SHA256:AszxYrj4gj258y44bVsPwwDC94LR0fHfgjHsFkIPyiw
Dec 13 01:55:58.362000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.363256 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:55:58.366514 kernel: audit: type=1101 audit(1734054958.358:600): pid=5882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.366635 kernel: audit: type=1103 audit(1734054958.362:601): pid=5882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.366659 kernel: audit: type=1006 audit(1734054958.362:602): pid=5882 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Dec 13 01:55:58.367122 systemd-logind[1304]: New session 30 of user core.
Dec 13 01:55:58.367876 systemd[1]: Started session-30.scope.
Dec 13 01:55:58.362000 audit[5882]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb18a3e70 a2=3 a3=0 items=0 ppid=1 pid=5882 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:58.372729 kernel: audit: type=1300 audit(1734054958.362:602): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb18a3e70 a2=3 a3=0 items=0 ppid=1 pid=5882 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:55:58.372788 kernel: audit: type=1327 audit(1734054958.362:602): proctitle=737368643A20636F7265205B707269765D
Dec 13 01:55:58.362000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 01:55:58.372000 audit[5882]: USER_START pid=5882 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.378381 kernel: audit: type=1105 audit(1734054958.372:603): pid=5882 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.378420 kernel: audit: type=1103 audit(1734054958.373:604): pid=5885 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.373000 audit[5885]: CRED_ACQ pid=5885 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.489880 sshd[5882]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:58.490000 audit[5882]: USER_END pid=5882 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.492559 systemd[1]: sshd@29-10.0.0.88:22-10.0.0.1:44650.service: Deactivated successfully.
Dec 13 01:55:58.493754 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 01:55:58.493864 systemd-logind[1304]: Session 30 logged out. Waiting for processes to exit.
Dec 13 01:55:58.494865 systemd-logind[1304]: Removed session 30.
Dec 13 01:55:58.490000 audit[5882]: CRED_DISP pid=5882 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.499723 kernel: audit: type=1106 audit(1734054958.490:605): pid=5882 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.499779 kernel: audit: type=1104 audit(1734054958.490:606): pid=5882 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 01:55:58.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.88:22-10.0.0.1:44650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:55:59.398139 kubelet[2228]: E1213 01:55:59.398108 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
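Each in-kernel audit record above carries an `audit(<epoch>.<millis>:<serial>)` stamp; records sharing a serial number (e.g. the type=1325/1300/1327 trio at :567) belong to one event, and the epoch converts to the wall-clock times shown in the log. A small sketch parsing one of the lines above (standard library only):

```python
import re
from datetime import datetime, timezone

# Sample kernel audit line copied from the log above.
line = ('kernel: audit: type=1325 audit(1734054939.709:567): '
        'table=nat:121 family=2 entries=46 op=nft_register_rule')

# Pull out the epoch seconds, milliseconds, and event serial number.
m = re.search(r'audit\((\d+)\.(\d+):(\d+)\)', line)
epoch, millis, serial = int(m.group(1)), int(m.group(2)), int(m.group(3))

# The epoch resolves to the same timestamp the console log prefixes show.
ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(ts.isoformat(), serial)
# → 2024-12-13T01:55:39+00:00 567
```

Grouping by the serial is how tools like `ausearch` reassemble the SYSCALL, NETFILTER_CFG, and PROCTITLE pieces of a single event.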