May 10 02:14:32.930780 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025 May 10 02:14:32.930822 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 02:14:32.930842 kernel: BIOS-provided physical RAM map: May 10 02:14:32.930853 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 10 02:14:32.930862 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 10 02:14:32.930871 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 10 02:14:32.930883 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable May 10 02:14:32.930893 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved May 10 02:14:32.930902 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 10 02:14:32.930912 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 10 02:14:32.930926 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 10 02:14:32.930936 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 10 02:14:32.930946 kernel: NX (Execute Disable) protection: active May 10 02:14:32.930956 kernel: SMBIOS 2.8 present. May 10 02:14:32.930968 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 May 10 02:14:32.930979 kernel: Hypervisor detected: KVM May 10 02:14:32.931006 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 10 02:14:32.931016 kernel: kvm-clock: cpu 0, msr 17196001, primary cpu clock May 10 02:14:32.931027 kernel: kvm-clock: using sched offset of 4805651184 cycles May 10 02:14:32.931038 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 10 02:14:32.931048 kernel: tsc: Detected 2499.998 MHz processor May 10 02:14:32.931071 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 10 02:14:32.931082 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 10 02:14:32.931093 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 May 10 02:14:32.931103 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 10 02:14:32.931119 kernel: Using GB pages for direct mapping May 10 02:14:32.931129 kernel: ACPI: Early table checksum verification disabled May 10 02:14:32.931140 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) May 10 02:14:32.931150 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 02:14:32.931161 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 10 02:14:32.931172 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 02:14:32.931182 kernel: ACPI: FACS 0x000000007FFDFD40 000040 May 10 02:14:32.931193 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 02:14:32.931214 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 02:14:32.931231 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 
00000001) May 10 02:14:32.931242 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 02:14:32.931252 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] May 10 02:14:32.931263 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] May 10 02:14:32.931273 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] May 10 02:14:32.931284 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] May 10 02:14:32.931300 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] May 10 02:14:32.931371 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] May 10 02:14:32.931383 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] May 10 02:14:32.931395 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 10 02:14:32.931406 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 10 02:14:32.931417 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 10 02:14:32.931428 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 May 10 02:14:32.931440 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 10 02:14:32.931455 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 May 10 02:14:32.931467 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 10 02:14:32.931490 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 May 10 02:14:32.931501 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 10 02:14:32.931512 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 May 10 02:14:32.931523 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 10 02:14:32.931533 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 May 10 02:14:32.931548 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 10 02:14:32.931558 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 May 10 02:14:32.931569 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 10 02:14:32.931597 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 May 10 02:14:32.931609 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 10 02:14:32.931624 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 10 02:14:32.931636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug May 10 02:14:32.931647 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] May 10 02:14:32.931659 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] May 10 02:14:32.931670 kernel: Zone ranges: May 10 02:14:32.931681 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 10 02:14:32.931693 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] May 10 02:14:32.931718 kernel: Normal empty May 10 02:14:32.931730 kernel: Movable zone start for each node May 10 02:14:32.931741 kernel: Early memory node ranges May 10 02:14:32.931752 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 10 02:14:32.931771 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] May 10 02:14:32.931783 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] May 10 02:14:32.931794 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 10 02:14:32.931805 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 10 02:14:32.931816 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges May 10 02:14:32.931832 kernel: ACPI: PM-Timer IO Port: 0x608 May 10 02:14:32.931843 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 10 02:14:32.931855 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 10 02:14:32.931866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 10 02:14:32.931877 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 10 02:14:32.931889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 10 02:14:32.931900 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 10 02:14:32.931911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 10 02:14:32.931923 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 10 02:14:32.931938 kernel: TSC deadline timer available May 10 02:14:32.931949 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs May 10 02:14:32.931961 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 10 02:14:32.931972 kernel: Booting paravirtualized kernel on KVM May 10 02:14:32.931983 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 10 02:14:32.931995 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 May 10 02:14:32.932006 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 May 10 02:14:32.932018 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 May 10 02:14:32.932029 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 May 10 02:14:32.932044 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 May 10 02:14:32.932055 kernel: kvm-guest: PV spinlocks enabled May 10 02:14:32.932067 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 10 02:14:32.932078 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 May 10 02:14:32.932089 kernel: Policy zone: DMA32 May 10 02:14:32.932102 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 02:14:32.932114 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 02:14:32.932125 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 10 02:14:32.932140 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 10 02:14:32.932152 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 02:14:32.932164 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 192524K reserved, 0K cma-reserved) May 10 02:14:32.932175 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 May 10 02:14:32.932187 kernel: Kernel/User page tables isolation: enabled May 10 02:14:32.932198 kernel: ftrace: allocating 34584 entries in 136 pages May 10 02:14:32.932221 kernel: ftrace: allocated 136 pages with 2 groups May 10 02:14:32.932233 kernel: rcu: Hierarchical RCU implementation. May 10 02:14:32.932245 kernel: rcu: RCU event tracing is enabled. May 10 02:14:32.932261 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. May 10 02:14:32.932273 kernel: Rude variant of Tasks RCU enabled. May 10 02:14:32.932285 kernel: Tracing variant of Tasks RCU enabled. May 10 02:14:32.932297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 10 02:14:32.933417 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 May 10 02:14:32.933433 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 May 10 02:14:32.933445 kernel: random: crng init done May 10 02:14:32.933472 kernel: Console: colour VGA+ 80x25 May 10 02:14:32.933487 kernel: printk: console [tty0] enabled May 10 02:14:32.933499 kernel: printk: console [ttyS0] enabled May 10 02:14:32.933511 kernel: ACPI: Core revision 20210730 May 10 02:14:32.933523 kernel: APIC: Switch to symmetric I/O mode setup May 10 02:14:32.933539 kernel: x2apic enabled May 10 02:14:32.933551 kernel: Switched APIC routing to physical x2apic. May 10 02:14:32.933563 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 10 02:14:32.933576 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) May 10 02:14:32.933596 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 10 02:14:32.933612 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 10 02:14:32.933624 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 10 02:14:32.933636 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 10 02:14:32.933648 kernel: Spectre V2 : Mitigation: Retpolines May 10 02:14:32.933666 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 10 02:14:32.933678 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 10 02:14:32.933690 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 10 02:14:32.933702 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 10 02:14:32.933714 kernel: MDS: Mitigation: Clear CPU buffers May 10 02:14:32.933727 kernel: MMIO Stale Data: Unknown: No mitigations May 10 02:14:32.933739 kernel: SRBDS: Unknown: Dependent on hypervisor status May 10 02:14:32.933755 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 10 02:14:32.933767 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 10 02:14:32.933779 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 10 02:14:32.933791 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 10 02:14:32.933803 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 10 02:14:32.933815 kernel: Freeing SMP alternatives memory: 32K May 10 02:14:32.933827 kernel: pid_max: default: 32768 minimum: 301 May 10 02:14:32.933838 kernel: LSM: Security Framework initializing May 10 02:14:32.933850 kernel: SELinux: Initializing. May 10 02:14:32.933862 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 10 02:14:32.933874 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 10 02:14:32.933891 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) May 10 02:14:32.933903 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. May 10 02:14:32.933915 kernel: signal: max sigframe size: 1776 May 10 02:14:32.933927 kernel: rcu: Hierarchical SRCU implementation. May 10 02:14:32.933939 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 10 02:14:32.933951 kernel: smp: Bringing up secondary CPUs ... 
May 10 02:14:32.933963 kernel: x86: Booting SMP configuration: May 10 02:14:32.933975 kernel: .... node #0, CPUs: #1 May 10 02:14:32.933988 kernel: kvm-clock: cpu 1, msr 17196041, secondary cpu clock May 10 02:14:32.934017 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 10 02:14:32.934029 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 May 10 02:14:32.934040 kernel: smp: Brought up 1 node, 2 CPUs May 10 02:14:32.934052 kernel: smpboot: Max logical packages: 16 May 10 02:14:32.934064 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) May 10 02:14:32.934075 kernel: devtmpfs: initialized May 10 02:14:32.934087 kernel: x86/mm: Memory block size: 128MB May 10 02:14:32.934098 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 02:14:32.934110 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) May 10 02:14:32.934125 kernel: pinctrl core: initialized pinctrl subsystem May 10 02:14:32.934137 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 02:14:32.934149 kernel: audit: initializing netlink subsys (disabled) May 10 02:14:32.934160 kernel: audit: type=2000 audit(1746843272.085:1): state=initialized audit_enabled=0 res=1 May 10 02:14:32.934172 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 02:14:32.934196 kernel: thermal_sys: Registered thermal governor 'user_space' May 10 02:14:32.934221 kernel: cpuidle: using governor menu May 10 02:14:32.934233 kernel: ACPI: bus type PCI registered May 10 02:14:32.934246 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 10 02:14:32.934263 kernel: dca service started, version 1.12.1 May 10 02:14:32.934275 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 10 02:14:32.934287 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 10 02:14:32.934299 kernel: PCI: Using configuration type 1 for base access May 10 02:14:32.934324 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 10 02:14:32.934337 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 10 02:14:32.934349 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 10 02:14:32.934361 kernel: ACPI: Added _OSI(Module Device) May 10 02:14:32.934373 kernel: ACPI: Added _OSI(Processor Device) May 10 02:14:32.934390 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 02:14:32.934403 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 02:14:32.934415 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 10 02:14:32.934427 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 10 02:14:32.934439 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 10 02:14:32.934451 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 10 02:14:32.934463 kernel: ACPI: Interpreter enabled May 10 02:14:32.934475 kernel: ACPI: PM: (supports S0 S5) May 10 02:14:32.934486 kernel: ACPI: Using IOAPIC for interrupt routing May 10 02:14:32.934503 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 10 02:14:32.934515 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 10 02:14:32.934527 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 02:14:32.934806 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 10 02:14:32.934974 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 10 02:14:32.935130 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 10 02:14:32.935148 kernel: PCI host bridge to bus 0000:00 May 10 02:14:32.935327 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 10 02:14:32.935481 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 10 02:14:32.935624 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 10 02:14:32.935782 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 10 02:14:32.935928 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 10 02:14:32.936075 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] May 10 02:14:32.936231 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 02:14:32.936429 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 10 02:14:32.936649 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 May 10 02:14:32.936807 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] May 10 02:14:32.936969 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] May 10 02:14:32.937157 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] May 10 02:14:32.937346 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 10 02:14:32.937516 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 10 02:14:32.937679 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] May 10 02:14:32.937855 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 10 02:14:32.938012 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] May 10 02:14:32.938199 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 10 02:14:32.944425 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] May 10 02:14:32.944612 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 10 02:14:32.944807 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] May 10 02:14:32.944979 kernel: pci 0000:00:02.4: 
[1b36:000c] type 01 class 0x060400 May 10 02:14:32.945155 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] May 10 02:14:32.945355 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 10 02:14:32.945516 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] May 10 02:14:32.945681 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 10 02:14:32.945867 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] May 10 02:14:32.946042 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 10 02:14:32.946218 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] May 10 02:14:32.946416 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 10 02:14:32.946584 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] May 10 02:14:32.946745 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] May 10 02:14:32.946923 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] May 10 02:14:32.947095 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] May 10 02:14:32.947273 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 10 02:14:32.947452 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 10 02:14:32.947608 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] May 10 02:14:32.947774 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] May 10 02:14:32.947950 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 10 02:14:32.948117 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 10 02:14:32.956385 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 10 02:14:32.956559 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] May 10 02:14:32.956721 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] May 10 02:14:32.956895 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 10 02:14:32.957062 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 10 02:14:32.957253 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 May 10 02:14:32.957455 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] May 10 02:14:32.957616 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] May 10 02:14:32.957771 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] May 10 02:14:32.957926 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 10 02:14:32.958113 kernel: pci_bus 0000:02: extended config space not accessible May 10 02:14:32.958341 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 May 10 02:14:32.958521 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] May 10 02:14:32.958683 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] May 10 02:14:32.958848 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] May 10 02:14:32.959015 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 May 10 02:14:32.959178 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] May 10 02:14:32.959369 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] May 10 02:14:32.959526 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] May 10 02:14:32.959688 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 10 02:14:32.959884 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 May 10 02:14:32.960044 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] May 10 02:14:32.960229 kernel: pci 
0000:00:02.2: PCI bridge to [bus 04] May 10 02:14:32.960419 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] May 10 02:14:32.960574 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 10 02:14:32.960726 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] May 10 02:14:32.960914 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] May 10 02:14:32.961089 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 10 02:14:32.961269 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] May 10 02:14:32.961441 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] May 10 02:14:32.961596 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 10 02:14:32.961750 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] May 10 02:14:32.961911 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] May 10 02:14:32.962076 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 10 02:14:32.962280 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] May 10 02:14:32.962449 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] May 10 02:14:32.962609 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 10 02:14:32.962764 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] May 10 02:14:32.962974 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] May 10 02:14:32.963123 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 10 02:14:32.963141 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 10 02:14:32.963154 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 10 02:14:32.963166 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 10 02:14:32.963217 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 10 02:14:32.963230 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 10 02:14:32.963243 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 10 02:14:32.963254 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 10 02:14:32.963267 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 10 02:14:32.963278 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 10 02:14:32.963290 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 10 02:14:32.971353 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 10 02:14:32.971376 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 10 02:14:32.971397 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 10 02:14:32.971409 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 10 02:14:32.971421 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 10 02:14:32.971433 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 10 02:14:32.971446 kernel: iommu: Default domain type: Translated May 10 02:14:32.971458 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 10 02:14:32.971629 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 10 02:14:32.971871 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 10 02:14:32.972048 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 10 02:14:32.972067 kernel: vgaarb: loaded May 10 02:14:32.972080 kernel: pps_core: LinuxPPS API ver. 1 registered May 10 02:14:32.972092 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 10 02:14:32.972104 kernel: PTP clock support registered May 10 02:14:32.972116 kernel: PCI: Using ACPI for IRQ routing May 10 02:14:32.972128 kernel: PCI: pci_cache_line_size set to 64 bytes May 10 02:14:32.972140 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 10 02:14:32.972152 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] May 10 02:14:32.972170 kernel: clocksource: Switched to clocksource kvm-clock May 10 02:14:32.972182 kernel: VFS: Disk quotas dquot_6.6.0 May 10 02:14:32.972195 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 02:14:32.972220 kernel: pnp: PnP ACPI init May 10 02:14:32.972436 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 10 02:14:32.972457 kernel: pnp: PnP ACPI: found 5 devices May 10 02:14:32.972470 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 10 02:14:32.972482 kernel: NET: Registered PF_INET protocol family May 10 02:14:32.972501 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 10 02:14:32.972514 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 10 02:14:32.972527 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 02:14:32.972539 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 10 02:14:32.972561 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 10 02:14:32.972585 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 10 02:14:32.972597 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 10 02:14:32.972609 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 10 02:14:32.972628 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 02:14:32.972644 kernel: NET: Registered PF_XDP protocol family May 10 02:14:32.972816 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 May 10 02:14:32.972972 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 10 02:14:32.973139 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 10 02:14:32.973332 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 May 10 02:14:32.973503 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 10 02:14:32.973669 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 10 02:14:32.973842 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 10 02:14:32.974006 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 10 02:14:32.974171 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] May 10 02:14:32.974352 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] May 10 02:14:32.974520 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] May 10 02:14:32.974700 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] May 10 02:14:32.974863 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] May 10 02:14:32.975026 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] May 10 02:14:32.975188 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] May 10 02:14:32.975378 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] May 10 
02:14:32.975564 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] May 10 02:14:32.975723 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] May 10 02:14:32.975877 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] May 10 02:14:32.976030 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] May 10 02:14:32.976223 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] May 10 02:14:32.976413 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 10 02:14:32.976589 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] May 10 02:14:32.976766 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] May 10 02:14:32.976929 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] May 10 02:14:32.977100 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 10 02:14:32.977297 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] May 10 02:14:32.977474 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] May 10 02:14:32.977663 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] May 10 02:14:32.977849 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 10 02:14:32.978029 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] May 10 02:14:32.978232 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] May 10 02:14:32.985442 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] May 10 02:14:32.985602 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 10 02:14:32.985765 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] May 10 02:14:32.985922 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] May 10 02:14:32.986083 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] May 10 02:14:32.986264 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 10 02:14:32.986441 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] May 10 02:14:32.986596 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] May 10 02:14:32.986749 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] May 10 02:14:32.986910 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 10 02:14:32.987077 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] May 10 02:14:32.987252 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] May 10 02:14:32.987425 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] May 10 02:14:32.987581 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 10 02:14:32.987735 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] May 10 02:14:32.987904 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] May 10 02:14:32.988052 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] May 10 02:14:32.988237 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 10 02:14:32.988411 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 10 02:14:32.988555 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 10 02:14:32.988696 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 10 02:14:32.988836 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 10 02:14:32.988977 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 10 02:14:32.989118 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] May 10 02:14:32.989296 kernel: pci_bus 0000:01: resource 0 [io 
0x1000-0x1fff] May 10 02:14:32.989479 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] May 10 02:14:32.989628 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] May 10 02:14:32.989795 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] May 10 02:14:32.989952 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] May 10 02:14:32.990100 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] May 10 02:14:32.990264 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] May 10 02:14:32.990454 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] May 10 02:14:32.990604 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] May 10 02:14:32.990755 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] May 10 02:14:32.990917 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] May 10 02:14:32.991066 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] May 10 02:14:32.991227 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] May 10 02:14:32.991442 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] May 10 02:14:32.991613 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] May 10 02:14:32.991787 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] May 10 02:14:32.991969 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] May 10 02:14:32.992150 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] May 10 02:14:32.996086 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] May 10 02:14:32.996273 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] May 10 02:14:32.996491 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] May 10 02:14:32.996662 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] May 10 02:14:32.996835 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] May 10 02:14:32.996983 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] May 10 02:14:32.997139 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] May 10 02:14:32.997159 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 10 02:14:32.997172 kernel: PCI: CLS 0 bytes, default 64 May 10 02:14:32.997187 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 10 02:14:32.997210 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) May 10 02:14:32.997232 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 10 02:14:32.997245 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 10 02:14:32.997265 kernel: Initialise system trusted keyrings May 10 02:14:32.997278 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 10 02:14:32.997291 kernel: Key type asymmetric registered May 10 02:14:32.997313 kernel: Asymmetric key parser 'x509' registered May 10 02:14:32.997327 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 10 02:14:32.997340 kernel: io scheduler mq-deadline registered May 10 02:14:32.997358 kernel: io scheduler kyber registered May 10 02:14:32.997372 kernel: io scheduler bfq registered May 10 02:14:32.997529 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 10 02:14:32.997686 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 10 02:14:32.997851 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ 
Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 02:14:32.998025 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 10 02:14:32.998223 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 10 02:14:32.998404 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 02:14:32.998569 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 10 02:14:32.998730 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 10 02:14:32.998884 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 02:14:32.999039 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 10 02:14:32.999193 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 10 02:14:32.999378 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 02:14:32.999543 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 10 02:14:32.999697 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 10 02:14:32.999850 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 02:14:33.000008 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 10 02:14:33.000172 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 10 02:14:33.000353 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 02:14:33.000519 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 10 02:14:33.000674 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 10 02:14:33.000830 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 02:14:33.000987 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 10 02:14:33.001142 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 10 02:14:33.001321 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 02:14:33.001349 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 02:14:33.001362 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 10 02:14:33.001375 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 10 02:14:33.001388 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 02:14:33.001401 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 02:14:33.001414 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 02:14:33.001427 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 02:14:33.001444 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 02:14:33.001618 kernel: rtc_cmos 00:03: RTC can wake from S4 May 10 02:14:33.001639 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 02:14:33.001784 kernel: rtc_cmos 00:03: registered as rtc0 May 10 02:14:33.001930 kernel: rtc_cmos 00:03: setting system clock to 2025-05-10T02:14:32 UTC (1746843272) May 10 02:14:33.002084 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 10 02:14:33.002103 kernel: intel_pstate: CPU model not supported May 10 02:14:33.002116 kernel: 
NET: Registered PF_INET6 protocol family May 10 02:14:33.002135 kernel: Segment Routing with IPv6 May 10 02:14:33.002148 kernel: In-situ OAM (IOAM) with IPv6 May 10 02:14:33.002161 kernel: NET: Registered PF_PACKET protocol family May 10 02:14:33.002178 kernel: Key type dns_resolver registered May 10 02:14:33.002191 kernel: IPI shorthand broadcast: enabled May 10 02:14:33.002215 kernel: sched_clock: Marking stable (1002720421, 224562154)->(1514327946, -287045371) May 10 02:14:33.002228 kernel: registered taskstats version 1 May 10 02:14:33.002241 kernel: Loading compiled-in X.509 certificates May 10 02:14:33.002262 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117' May 10 02:14:33.002279 kernel: Key type .fscrypt registered May 10 02:14:33.002292 kernel: Key type fscrypt-provisioning registered May 10 02:14:33.002318 kernel: ima: No TPM chip found, activating TPM-bypass! May 10 02:14:33.002332 kernel: ima: Allocated hash algorithm: sha1 May 10 02:14:33.002344 kernel: ima: No architecture policies found May 10 02:14:33.002357 kernel: clk: Disabling unused clocks May 10 02:14:33.002370 kernel: Freeing unused kernel image (initmem) memory: 47456K May 10 02:14:33.002383 kernel: Write protecting the kernel read-only data: 28672k May 10 02:14:33.002396 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 10 02:14:33.002414 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 10 02:14:33.002427 kernel: Run /init as init process May 10 02:14:33.002439 kernel: with arguments: May 10 02:14:33.002452 kernel: /init May 10 02:14:33.002464 kernel: with environment: May 10 02:14:33.002476 kernel: HOME=/ May 10 02:14:33.002488 kernel: TERM=linux May 10 02:14:33.002500 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 02:14:33.002523 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 02:14:33.002546 systemd[1]: Detected virtualization kvm. May 10 02:14:33.002560 systemd[1]: Detected architecture x86-64. May 10 02:14:33.002574 systemd[1]: Running in initrd. May 10 02:14:33.002587 systemd[1]: No hostname configured, using default hostname. May 10 02:14:33.002600 systemd[1]: Hostname set to . May 10 02:14:33.002614 systemd[1]: Initializing machine ID from VM UUID. May 10 02:14:33.002635 systemd[1]: Queued start job for default target initrd.target. May 10 02:14:33.002653 systemd[1]: Started systemd-ask-password-console.path. May 10 02:14:33.002670 systemd[1]: Reached target cryptsetup.target. May 10 02:14:33.002684 systemd[1]: Reached target paths.target. May 10 02:14:33.002697 systemd[1]: Reached target slices.target. May 10 02:14:33.002710 systemd[1]: Reached target swap.target. May 10 02:14:33.002723 systemd[1]: Reached target timers.target. May 10 02:14:33.002737 systemd[1]: Listening on iscsid.socket. May 10 02:14:33.002751 systemd[1]: Listening on iscsiuio.socket. May 10 02:14:33.002769 systemd[1]: Listening on systemd-journald-audit.socket. May 10 02:14:33.002782 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 02:14:33.002796 systemd[1]: Listening on systemd-journald.socket. May 10 02:14:33.002809 systemd[1]: Listening on systemd-networkd.socket. 
May 10 02:14:33.002823 systemd[1]: Listening on systemd-udevd-control.socket. May 10 02:14:33.002836 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 02:14:33.002850 systemd[1]: Reached target sockets.target. May 10 02:14:33.002863 systemd[1]: Starting kmod-static-nodes.service... May 10 02:14:33.002877 systemd[1]: Finished network-cleanup.service. May 10 02:14:33.002895 systemd[1]: Starting systemd-fsck-usr.service... May 10 02:14:33.002908 systemd[1]: Starting systemd-journald.service... May 10 02:14:33.002922 systemd[1]: Starting systemd-modules-load.service... May 10 02:14:33.002943 systemd[1]: Starting systemd-resolved.service... May 10 02:14:33.002957 systemd[1]: Starting systemd-vconsole-setup.service... May 10 02:14:33.002970 systemd[1]: Finished kmod-static-nodes.service. May 10 02:14:33.002984 kernel: audit: type=1130 audit(1746843272.926:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.003006 systemd[1]: Finished systemd-fsck-usr.service. May 10 02:14:33.003032 systemd-journald[202]: Journal started May 10 02:14:33.003108 systemd-journald[202]: Runtime Journal (/run/log/journal/6604e0c559554a0c82e6ab29638386ff) is 4.7M, max 38.1M, 33.3M free. May 10 02:14:32.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:32.927116 systemd-modules-load[203]: Inserted module 'overlay' May 10 02:14:33.031415 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 02:14:33.031451 kernel: Bridge firewalling registered May 10 02:14:32.979970 systemd-resolved[204]: Positive Trust Anchors: May 10 02:14:33.039147 systemd[1]: Started systemd-resolved.service. May 10 02:14:33.039173 kernel: audit: type=1130 audit(1746843273.031:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:32.979993 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 02:14:33.048078 systemd[1]: Started systemd-journald.service. May 10 02:14:33.048113 kernel: audit: type=1130 audit(1746843273.039:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.048140 kernel: SCSI subsystem initialized May 10 02:14:33.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:14:32.980038 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 02:14:33.056044 kernel: audit: type=1130 audit(1746843273.048:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:32.988060 systemd-resolved[204]: Defaulting to hostname 'linux'. May 10 02:14:33.071829 kernel: audit: type=1130 audit(1746843273.056:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.071855 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 02:14:33.071885 kernel: device-mapper: uevent: version 1.0.3 May 10 02:14:33.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.079536 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 10 02:14:33.079565 kernel: audit: type=1130 audit(1746843273.079:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.012497 systemd-modules-load[203]: Inserted module 'br_netfilter' May 10 02:14:33.049123 systemd[1]: Finished systemd-vconsole-setup.service. May 10 02:14:33.056930 systemd[1]: Reached target nss-lookup.target. May 10 02:14:33.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.064373 systemd[1]: Starting dracut-cmdline-ask.service... May 10 02:14:33.095996 kernel: audit: type=1130 audit(1746843273.088:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.066060 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 02:14:33.078744 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 02:14:33.087292 systemd-modules-load[203]: Inserted module 'dm_multipath' May 10 02:14:33.088297 systemd[1]: Finished systemd-modules-load.service. May 10 02:14:33.089876 systemd[1]: Starting systemd-sysctl.service... 
May 10 02:14:33.103852 systemd[1]: Finished systemd-sysctl.service. May 10 02:14:33.110454 kernel: audit: type=1130 audit(1746843273.104:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.110821 systemd[1]: Finished dracut-cmdline-ask.service. May 10 02:14:33.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.112543 systemd[1]: Starting dracut-cmdline.service... May 10 02:14:33.132801 kernel: audit: type=1130 audit(1746843273.111:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.132829 dracut-cmdline[224]: dracut-dracut-053 May 10 02:14:33.136375 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 02:14:33.226340 kernel: Loading iSCSI transport class v2.0-870. May 10 02:14:33.248333 kernel: iscsi: registered transport (tcp) May 10 02:14:33.279029 kernel: iscsi: registered transport (qla4xxx) May 10 02:14:33.279099 kernel: QLogic iSCSI HBA Driver May 10 02:14:33.330144 systemd[1]: Finished dracut-cmdline.service. May 10 02:14:33.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.332232 systemd[1]: Starting dracut-pre-udev.service... May 10 02:14:33.392358 kernel: raid6: sse2x4 gen() 12963 MB/s May 10 02:14:33.410370 kernel: raid6: sse2x4 xor() 7178 MB/s May 10 02:14:33.428354 kernel: raid6: sse2x2 gen() 8800 MB/s May 10 02:14:33.446367 kernel: raid6: sse2x2 xor() 7603 MB/s May 10 02:14:33.464382 kernel: raid6: sse2x1 gen() 9153 MB/s May 10 02:14:33.483150 kernel: raid6: sse2x1 xor() 6911 MB/s May 10 02:14:33.483226 kernel: raid6: using algorithm sse2x4 gen() 12963 MB/s May 10 02:14:33.483254 kernel: raid6: .... xor() 7178 MB/s, rmw enabled May 10 02:14:33.484475 kernel: raid6: using ssse3x2 recovery algorithm May 10 02:14:33.502337 kernel: xor: automatically using best checksumming function avx May 10 02:14:33.621346 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 10 02:14:33.635892 systemd[1]: Finished dracut-pre-udev.service. May 10 02:14:33.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.636000 audit: BPF prog-id=7 op=LOAD May 10 02:14:33.636000 audit: BPF prog-id=8 op=LOAD May 10 02:14:33.637909 systemd[1]: Starting systemd-udevd.service... 
May 10 02:14:33.655695 systemd-udevd[401]: Using default interface naming scheme 'v252'. May 10 02:14:33.664778 systemd[1]: Started systemd-udevd.service. May 10 02:14:33.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.666482 systemd[1]: Starting dracut-pre-trigger.service... May 10 02:14:33.685380 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation May 10 02:14:33.725462 systemd[1]: Finished dracut-pre-trigger.service. May 10 02:14:33.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.727255 systemd[1]: Starting systemd-udev-trigger.service... May 10 02:14:33.822140 systemd[1]: Finished systemd-udev-trigger.service. May 10 02:14:33.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:33.910336 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 10 02:14:33.984374 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 02:14:33.984409 kernel: GPT:17805311 != 125829119 May 10 02:14:33.984428 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 02:14:33.984444 kernel: GPT:17805311 != 125829119 May 10 02:14:33.984461 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 02:14:33.984495 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 02:14:33.984515 kernel: cryptd: max_cpu_qlen set to 1000 May 10 02:14:33.984532 kernel: AVX version of gcm_enc/dec engaged. May 10 02:14:33.984548 kernel: AES CTR mode by8 optimization enabled May 10 02:14:33.984564 kernel: libata version 3.00 loaded. 
May 10 02:14:33.998944 kernel: ahci 0000:00:1f.2: version 3.0 May 10 02:14:34.068694 kernel: ACPI: bus type USB registered May 10 02:14:34.068720 kernel: usbcore: registered new interface driver usbfs May 10 02:14:34.068750 kernel: usbcore: registered new interface driver hub May 10 02:14:34.068775 kernel: usbcore: registered new device driver usb May 10 02:14:34.068793 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) May 10 02:14:34.068822 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 10 02:14:34.068852 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 10 02:14:34.069074 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 10 02:14:34.069268 kernel: scsi host0: ahci May 10 02:14:34.069524 kernel: scsi host1: ahci May 10 02:14:34.069711 kernel: scsi host2: ahci May 10 02:14:34.069921 kernel: scsi host3: ahci May 10 02:14:34.070116 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 10 02:14:34.070361 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 May 10 02:14:34.070559 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 10 02:14:34.070738 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 10 02:14:34.070928 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 May 10 02:14:34.071114 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed May 10 02:14:34.071354 kernel: hub 1-0:1.0: USB hub found May 10 02:14:34.071963 kernel: hub 1-0:1.0: 4 ports detected May 10 02:14:34.072169 kernel: scsi host4: ahci May 10 02:14:34.072393 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 10 02:14:34.072611 kernel: hub 2-0:1.0: USB hub found May 10 02:14:34.072935 kernel: hub 2-0:1.0: 4 ports detected May 10 02:14:34.073145 kernel: scsi host5: ahci May 10 02:14:34.073369 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 May 10 02:14:34.073390 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 May 10 02:14:34.073408 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 May 10 02:14:34.073424 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 May 10 02:14:34.073441 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 May 10 02:14:34.073458 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 May 10 02:14:34.025237 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 10 02:14:34.161084 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 10 02:14:34.169440 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 10 02:14:34.170390 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 10 02:14:34.176792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 02:14:34.178928 systemd[1]: Starting disk-uuid.service... May 10 02:14:34.186847 disk-uuid[528]: Primary Header is updated. May 10 02:14:34.186847 disk-uuid[528]: Secondary Entries is updated. May 10 02:14:34.186847 disk-uuid[528]: Secondary Header is updated. 
May 10 02:14:34.191334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 02:14:34.306329 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 10 02:14:34.374379 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 10 02:14:34.382329 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 10 02:14:34.382368 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 10 02:14:34.385498 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 10 02:14:34.387186 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 10 02:14:34.389332 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 10 02:14:34.451331 kernel: hid: raw HID events driver (C) Jiri Kosina May 10 02:14:34.458933 kernel: usbcore: registered new interface driver usbhid May 10 02:14:34.458968 kernel: usbhid: USB HID core driver May 10 02:14:34.468743 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 May 10 02:14:34.468799 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 May 10 02:14:35.206687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 02:14:35.207899 disk-uuid[529]: The operation has completed successfully. May 10 02:14:35.217342 kernel: block device autoloading is deprecated. It will be removed in Linux 5.19 May 10 02:14:35.268061 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 02:14:35.268260 systemd[1]: Finished disk-uuid.service. May 10 02:14:35.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.274729 systemd[1]: Starting verity-setup.service... May 10 02:14:35.296335 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" May 10 02:14:35.351002 systemd[1]: Found device dev-mapper-usr.device. May 10 02:14:35.352841 systemd[1]: Mounting sysusr-usr.mount... May 10 02:14:35.354812 systemd[1]: Finished verity-setup.service. May 10 02:14:35.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.451350 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 10 02:14:35.451918 systemd[1]: Mounted sysusr-usr.mount. May 10 02:14:35.452791 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 10 02:14:35.453847 systemd[1]: Starting ignition-setup.service... May 10 02:14:35.456863 systemd[1]: Starting parse-ip-for-networkd.service... May 10 02:14:35.474570 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 02:14:35.474618 kernel: BTRFS info (device vda6): using free space tree May 10 02:14:35.474638 kernel: BTRFS info (device vda6): has skinny extents May 10 02:14:35.498040 systemd[1]: mnt-oem.mount: Deactivated successfully. May 10 02:14:35.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:14:35.506026 systemd[1]: Finished ignition-setup.service. May 10 02:14:35.507832 systemd[1]: Starting ignition-fetch-offline.service... May 10 02:14:35.599842 systemd[1]: Finished parse-ip-for-networkd.service. May 10 02:14:35.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.605000 audit: BPF prog-id=9 op=LOAD May 10 02:14:35.607403 systemd[1]: Starting systemd-networkd.service... May 10 02:14:35.652535 systemd-networkd[711]: lo: Link UP May 10 02:14:35.652548 systemd-networkd[711]: lo: Gained carrier May 10 02:14:35.653523 systemd-networkd[711]: Enumeration completed May 10 02:14:35.653969 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 02:14:35.655594 systemd-networkd[711]: eth0: Link UP May 10 02:14:35.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.655600 systemd-networkd[711]: eth0: Gained carrier May 10 02:14:35.656028 systemd[1]: Started systemd-networkd.service. May 10 02:14:35.659187 systemd[1]: Reached target network.target. May 10 02:14:35.661818 systemd[1]: Starting iscsiuio.service... May 10 02:14:35.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.676711 systemd[1]: Started iscsiuio.service. May 10 02:14:35.678609 systemd[1]: Starting iscsid.service... May 10 02:14:35.684922 iscsid[716]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 10 02:14:35.684922 iscsid[716]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 10 02:14:35.684922 iscsid[716]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 10 02:14:35.684922 iscsid[716]: If using hardware iscsi like qla4xxx this message can be ignored. May 10 02:14:35.684922 iscsid[716]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 10 02:14:35.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.701324 iscsid[716]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 10 02:14:35.688054 systemd[1]: Started iscsid.service. May 10 02:14:35.689447 systemd-networkd[711]: eth0: DHCPv4 address 10.230.33.70/30, gateway 10.230.33.69 acquired from 10.230.33.69 May 10 02:14:35.693614 systemd[1]: Starting dracut-initqueue.service... May 10 02:14:35.710409 systemd[1]: Finished dracut-initqueue.service. May 10 02:14:35.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 10 02:14:35.711240 systemd[1]: Reached target remote-fs-pre.target. May 10 02:14:35.713153 systemd[1]: Reached target remote-cryptsetup.target. May 10 02:14:35.714883 systemd[1]: Reached target remote-fs.target. May 10 02:14:35.717520 systemd[1]: Starting dracut-pre-mount.service... May 10 02:14:35.733386 systemd[1]: Finished dracut-pre-mount.service. May 10 02:14:35.734665 ignition[637]: Ignition 2.14.0 May 10 02:14:35.734685 ignition[637]: Stage: fetch-offline May 10 02:14:35.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.734806 ignition[637]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 02:14:35.734909 ignition[637]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 02:14:35.736488 ignition[637]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 02:14:35.736640 ignition[637]: parsed url from cmdline: "" May 10 02:14:35.737979 systemd[1]: Finished ignition-fetch-offline.service. May 10 02:14:35.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.736647 ignition[637]: no config URL provided May 10 02:14:35.736658 ignition[637]: reading system config file "/usr/lib/ignition/user.ign" May 10 02:14:35.740598 systemd[1]: Starting ignition-fetch.service... May 10 02:14:35.736673 ignition[637]: no config at "/usr/lib/ignition/user.ign" May 10 02:14:35.736693 ignition[637]: failed to fetch config: resource requires networking May 10 02:14:35.737035 ignition[637]: Ignition finished successfully May 10 02:14:35.752917 ignition[730]: Ignition 2.14.0 May 10 02:14:35.752940 ignition[730]: Stage: fetch May 10 02:14:35.753098 ignition[730]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 02:14:35.753141 ignition[730]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 02:14:35.754569 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 02:14:35.754715 ignition[730]: parsed url from cmdline: "" May 10 02:14:35.754722 ignition[730]: no config URL provided May 10 02:14:35.754732 ignition[730]: reading system config file "/usr/lib/ignition/user.ign" May 10 02:14:35.754747 ignition[730]: no config at "/usr/lib/ignition/user.ign" May 10 02:14:35.760292 ignition[730]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 10 02:14:35.760349 ignition[730]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 10 02:14:35.760476 ignition[730]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
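With no config drive labelled config-2/CONFIG-2 present, Ignition falls back to the OpenStack metadata service shown above. The same endpoint can be inspected by hand from inside the instance, e.g.:

  # Fetch the user_data document Ignition consumed (same URL as in the log)
  curl -s http://169.254.169.254/openstack/latest/user_data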
May 10 02:14:35.781103 ignition[730]: GET result: OK May 10 02:14:35.782042 ignition[730]: parsing config with SHA512: 6e38f409eb8d6117f1a49bcdee6579e7633bf8b1f62eb7848f5b5878eefc74dcae539ba53b49a39791309df223549c130d456db071c1b15a0054913c64669512 May 10 02:14:35.791060 unknown[730]: fetched base config from "system" May 10 02:14:35.791924 unknown[730]: fetched base config from "system" May 10 02:14:35.792730 unknown[730]: fetched user config from "openstack" May 10 02:14:35.794042 ignition[730]: fetch: fetch complete May 10 02:14:35.794780 ignition[730]: fetch: fetch passed May 10 02:14:35.795579 ignition[730]: Ignition finished successfully May 10 02:14:35.797897 systemd[1]: Finished ignition-fetch.service. May 10 02:14:35.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.799844 systemd[1]: Starting ignition-kargs.service... May 10 02:14:35.811768 ignition[736]: Ignition 2.14.0 May 10 02:14:35.811785 ignition[736]: Stage: kargs May 10 02:14:35.811938 ignition[736]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 02:14:35.811971 ignition[736]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 02:14:35.813232 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 02:14:35.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.815885 systemd[1]: Finished ignition-kargs.service. May 10 02:14:35.814889 ignition[736]: kargs: kargs passed May 10 02:14:35.814948 ignition[736]: Ignition finished successfully May 10 02:14:35.818024 systemd[1]: Starting ignition-disks.service... May 10 02:14:35.829192 ignition[741]: Ignition 2.14.0 May 10 02:14:35.829212 ignition[741]: Stage: disks May 10 02:14:35.829383 ignition[741]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 02:14:35.829416 ignition[741]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 02:14:35.830761 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 02:14:35.832499 ignition[741]: disks: disks passed May 10 02:14:35.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.833393 systemd[1]: Finished ignition-disks.service. May 10 02:14:35.832558 ignition[741]: Ignition finished successfully May 10 02:14:35.834419 systemd[1]: Reached target initrd-root-device.target. May 10 02:14:35.835482 systemd[1]: Reached target local-fs-pre.target. May 10 02:14:35.836742 systemd[1]: Reached target local-fs.target. May 10 02:14:35.837993 systemd[1]: Reached target sysinit.target. May 10 02:14:35.839390 systemd[1]: Reached target basic.target. May 10 02:14:35.841786 systemd[1]: Starting systemd-fsck-root.service... 
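Only the SHA512 of the fetched user_data appears in the journal, not the config itself. For orientation, the smallest valid Ignition config is a one-line JSON document; the spec version below is an assumption (a 3.x spec accepted by this Ignition 2.14.0 build):

  # Hypothetical no-op config; real deployments add storage/systemd/passwd sections
  echo '{"ignition":{"version":"3.3.0"}}' > user.ign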
May 10 02:14:35.863370 systemd-fsck[749]: ROOT: clean, 623/1628000 files, 124060/1617920 blocks May 10 02:14:35.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:35.867560 systemd[1]: Finished systemd-fsck-root.service. May 10 02:14:35.869110 systemd[1]: Mounting sysroot.mount... May 10 02:14:35.881340 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 10 02:14:35.882087 systemd[1]: Mounted sysroot.mount. May 10 02:14:35.882890 systemd[1]: Reached target initrd-root-fs.target. May 10 02:14:35.885608 systemd[1]: Mounting sysroot-usr.mount... May 10 02:14:35.886872 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 10 02:14:35.888008 systemd[1]: Starting flatcar-openstack-hostname.service... May 10 02:14:35.891173 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 02:14:35.891238 systemd[1]: Reached target ignition-diskful.target. May 10 02:14:35.897206 systemd[1]: Mounted sysroot-usr.mount. May 10 02:14:35.900282 systemd[1]: Starting initrd-setup-root.service... May 10 02:14:35.914011 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory May 10 02:14:35.926660 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory May 10 02:14:35.938615 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory May 10 02:14:35.948054 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory May 10 02:14:36.011498 systemd[1]: Finished initrd-setup-root.service. May 10 02:14:36.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:36.013399 systemd[1]: Starting ignition-mount.service... May 10 02:14:36.015051 systemd[1]: Starting sysroot-boot.service... May 10 02:14:36.029224 bash[803]: umount: /sysroot/usr/share/oem: not mounted. May 10 02:14:36.040993 ignition[805]: INFO : Ignition 2.14.0 May 10 02:14:36.040993 ignition[805]: INFO : Stage: mount May 10 02:14:36.042657 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 02:14:36.042657 ignition[805]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 02:14:36.044897 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 02:14:36.045861 ignition[805]: INFO : mount: mount passed May 10 02:14:36.045861 ignition[805]: INFO : Ignition finished successfully May 10 02:14:36.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:36.045782 systemd[1]: Finished ignition-mount.service. May 10 02:14:36.052535 coreos-metadata[755]: May 10 02:14:36.052 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 10 02:14:36.062776 systemd[1]: Finished sysroot-boot.service. 
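The coreos-metadata fetch above reads the instance hostname from the EC2-compatible metadata path and hands it to flatcar-openstack-hostname.service for the new root. Done by hand on a running system, the equivalent is roughly (a sketch, not the service's actual implementation):

  # Same metadata path as in the log; hostnamectl persists the value to /etc/hostname
  hostnamectl set-hostname "$(curl -s http://169.254.169.254/latest/meta-data/hostname)"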
May 10 02:14:36.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:36.068828 coreos-metadata[755]: May 10 02:14:36.068 INFO Fetch successful May 10 02:14:36.069688 coreos-metadata[755]: May 10 02:14:36.069 INFO wrote hostname srv-it8yl.gb1.brightbox.com to /sysroot/etc/hostname May 10 02:14:36.071838 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 10 02:14:36.071994 systemd[1]: Finished flatcar-openstack-hostname.service. May 10 02:14:36.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:36.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:36.373616 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 10 02:14:36.395915 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (812) May 10 02:14:36.395956 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 02:14:36.395975 kernel: BTRFS info (device vda6): using free space tree May 10 02:14:36.395992 kernel: BTRFS info (device vda6): has skinny extents May 10 02:14:36.400375 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 10 02:14:36.402140 systemd[1]: Starting ignition-files.service... May 10 02:14:36.423444 ignition[832]: INFO : Ignition 2.14.0 May 10 02:14:36.424474 ignition[832]: INFO : Stage: files May 10 02:14:36.425394 ignition[832]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 02:14:36.426448 ignition[832]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 02:14:36.427862 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 02:14:36.430243 ignition[832]: DEBUG : files: compiled without relabeling support, skipping May 10 02:14:36.431398 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 02:14:36.431398 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 02:14:36.434597 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 02:14:36.435846 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 02:14:36.435846 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 02:14:36.435719 unknown[832]: wrote ssh authorized keys file for user: core May 10 02:14:36.438684 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 10 02:14:36.438684 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 10 02:14:36.438684 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 02:14:36.438684 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 10 02:14:36.828007 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 02:14:36.862573 systemd-networkd[711]: eth0: Gained IPv6LL May 10 02:14:37.705443 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 02:14:37.707357 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 10 02:14:37.707357 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 10 02:14:37.707357 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 02:14:37.707357 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 02:14:37.707357 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 02:14:37.707357 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 02:14:37.707357 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 02:14:37.715252 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 02:14:37.715252 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 02:14:37.715252 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 02:14:37.715252 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 02:14:37.715252 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 02:14:37.715252 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 02:14:37.715252 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 10 02:14:37.791180 systemd-networkd[711]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8851:24:19ff:fee6:2146/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8851:24:19ff:fee6:2146/64 assigned by NDisc. May 10 02:14:37.791205 systemd-networkd[711]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
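The networkd hint above can be acted on with a .network drop-in on the installed system. A minimal sketch, assuming eth0 and using the two knobs the message itself suggests (the file name is illustrative):

  cat >/etc/systemd/network/10-eth0.network <<'EOF'
  [Match]
  Name=eth0

  [Network]
  DHCP=yes
  # Option 1: pin the SLAAC interface identifier to the token already in use
  #IPv6Token=::24:19ff:fee6:2146

  [IPv6AcceptRA]
  # Option 2: stop generating addresses from advertised autonomous prefixes
  UseAutonomousPrefix=no
  EOF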
May 10 02:14:38.500841 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 10 02:14:39.955851 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 10 02:14:39.957772 ignition[832]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" May 10 02:14:39.957772 ignition[832]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" May 10 02:14:39.957772 ignition[832]: INFO : files: op(d): [started] processing unit "containerd.service" May 10 02:14:39.957772 ignition[832]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 10 02:14:39.957772 ignition[832]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 10 02:14:39.957772 ignition[832]: INFO : files: op(d): [finished] processing unit "containerd.service" May 10 02:14:39.957772 ignition[832]: INFO : files: op(f): [started] processing unit "prepare-helm.service" May 10 02:14:39.957772 ignition[832]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 02:14:39.968117 ignition[832]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 02:14:39.968117 ignition[832]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" May 10 02:14:39.968117 ignition[832]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 02:14:39.968117 ignition[832]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 02:14:39.968117 ignition[832]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 10 02:14:39.968117 ignition[832]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 10 02:14:39.968117 ignition[832]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 02:14:39.968117 ignition[832]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 02:14:39.968117 ignition[832]: INFO : files: files passed May 10 02:14:39.968117 ignition[832]: INFO : Ignition finished successfully May 10 02:14:40.010173 kernel: kauditd_printk_skb: 28 callbacks suppressed May 10 02:14:40.010205 kernel: audit: type=1130 audit(1746843279.971:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.010252 kernel: audit: type=1130 audit(1746843279.989:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.010274 kernel: audit: type=1131 audit(1746843279.989:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:14:40.010292 kernel: audit: type=1130 audit(1746843280.000:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:39.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:39.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:39.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:39.969007 systemd[1]: Finished ignition-files.service. May 10 02:14:39.974293 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 10 02:14:40.012566 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 02:14:39.982101 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 10 02:14:39.983966 systemd[1]: Starting ignition-quench.service... May 10 02:14:39.988710 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 02:14:39.988848 systemd[1]: Finished ignition-quench.service. May 10 02:14:39.989955 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 10 02:14:40.000470 systemd[1]: Reached target ignition-complete.target. May 10 02:14:40.008016 systemd[1]: Starting initrd-parse-etc.service... May 10 02:14:40.027578 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 02:14:40.027735 systemd[1]: Finished initrd-parse-etc.service. May 10 02:14:40.039566 kernel: audit: type=1130 audit(1746843280.028:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.039605 kernel: audit: type=1131 audit(1746843280.028:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.029207 systemd[1]: Reached target initrd-fs.target. May 10 02:14:40.040237 systemd[1]: Reached target initrd.target. May 10 02:14:40.041535 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 10 02:14:40.042600 systemd[1]: Starting dracut-pre-pivot.service... May 10 02:14:40.060206 systemd[1]: Finished dracut-pre-pivot.service. 
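The audit(1746843280.028:43) tag in the kauditd lines above is an epoch timestamp plus a record serial number, so it maps back to the wall-clock times in this journal with date:

  # 1746843280 is the epoch-seconds field from the audit record shown above
  date -u -d @1746843280   # Sat May 10 02:14:40 UTC 2025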
May 10 02:14:40.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.066756 systemd[1]: Starting initrd-cleanup.service... May 10 02:14:40.067527 kernel: audit: type=1130 audit(1746843280.061:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.079550 systemd[1]: Stopped target nss-lookup.target. May 10 02:14:40.081098 systemd[1]: Stopped target remote-cryptsetup.target. May 10 02:14:40.082700 systemd[1]: Stopped target timers.target. May 10 02:14:40.084132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 02:14:40.085192 systemd[1]: Stopped dracut-pre-pivot.service. May 10 02:14:40.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.086914 systemd[1]: Stopped target initrd.target. May 10 02:14:40.092544 kernel: audit: type=1131 audit(1746843280.086:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.093231 systemd[1]: Stopped target basic.target. May 10 02:14:40.093991 systemd[1]: Stopped target ignition-complete.target. May 10 02:14:40.095354 systemd[1]: Stopped target ignition-diskful.target. May 10 02:14:40.096760 systemd[1]: Stopped target initrd-root-device.target. May 10 02:14:40.098175 systemd[1]: Stopped target remote-fs.target. May 10 02:14:40.099722 systemd[1]: Stopped target remote-fs-pre.target. May 10 02:14:40.100988 systemd[1]: Stopped target sysinit.target. May 10 02:14:40.102279 systemd[1]: Stopped target local-fs.target. May 10 02:14:40.103510 systemd[1]: Stopped target local-fs-pre.target. May 10 02:14:40.104770 systemd[1]: Stopped target swap.target. May 10 02:14:40.105944 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 02:14:40.112657 kernel: audit: type=1131 audit(1746843280.106:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.106196 systemd[1]: Stopped dracut-pre-mount.service. May 10 02:14:40.107417 systemd[1]: Stopped target cryptsetup.target. May 10 02:14:40.120008 kernel: audit: type=1131 audit(1746843280.114:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.113427 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 02:14:40.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 10 02:14:40.113662 systemd[1]: Stopped dracut-initqueue.service. May 10 02:14:40.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.114825 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 02:14:40.115060 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 10 02:14:40.120968 systemd[1]: ignition-files.service: Deactivated successfully. May 10 02:14:40.121196 systemd[1]: Stopped ignition-files.service. May 10 02:14:40.123435 systemd[1]: Stopping ignition-mount.service... May 10 02:14:40.124592 systemd[1]: Stopping iscsiuio.service... May 10 02:14:40.133559 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 02:14:40.134646 systemd[1]: Stopped kmod-static-nodes.service. May 10 02:14:40.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.137668 systemd[1]: Stopping sysroot-boot.service... May 10 02:14:40.139043 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 02:14:40.140218 systemd[1]: Stopped systemd-udev-trigger.service. May 10 02:14:40.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.141979 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 02:14:40.143186 systemd[1]: Stopped dracut-pre-trigger.service. May 10 02:14:40.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.146312 ignition[870]: INFO : Ignition 2.14.0 May 10 02:14:40.146312 ignition[870]: INFO : Stage: umount May 10 02:14:40.147910 ignition[870]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 02:14:40.147910 ignition[870]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 02:14:40.152150 systemd[1]: iscsiuio.service: Deactivated successfully. May 10 02:14:40.153698 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 02:14:40.152295 systemd[1]: Stopped iscsiuio.service. May 10 02:14:40.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.156706 ignition[870]: INFO : umount: umount passed May 10 02:14:40.156706 ignition[870]: INFO : Ignition finished successfully May 10 02:14:40.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.157745 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 10 02:14:40.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.157870 systemd[1]: Finished initrd-cleanup.service. May 10 02:14:40.159766 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 02:14:40.159890 systemd[1]: Stopped ignition-mount.service. May 10 02:14:40.164653 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 02:14:40.166229 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 02:14:40.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.166332 systemd[1]: Stopped ignition-disks.service. May 10 02:14:40.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.167551 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 02:14:40.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.167617 systemd[1]: Stopped ignition-kargs.service. May 10 02:14:40.168865 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 02:14:40.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.168926 systemd[1]: Stopped ignition-fetch.service. May 10 02:14:40.170132 systemd[1]: Stopped target network.target. May 10 02:14:40.172179 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 02:14:40.172247 systemd[1]: Stopped ignition-fetch-offline.service. May 10 02:14:40.173671 systemd[1]: Stopped target paths.target. May 10 02:14:40.174982 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 02:14:40.179441 systemd[1]: Stopped systemd-ask-password-console.path. May 10 02:14:40.180221 systemd[1]: Stopped target slices.target. May 10 02:14:40.182101 systemd[1]: Stopped target sockets.target. May 10 02:14:40.183394 systemd[1]: iscsid.socket: Deactivated successfully. May 10 02:14:40.183437 systemd[1]: Closed iscsid.socket. May 10 02:14:40.184583 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 02:14:40.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.184641 systemd[1]: Closed iscsiuio.socket. May 10 02:14:40.185805 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 02:14:40.185865 systemd[1]: Stopped ignition-setup.service. May 10 02:14:40.187268 systemd[1]: Stopping systemd-networkd.service... May 10 02:14:40.188920 systemd[1]: Stopping systemd-resolved.service... May 10 02:14:40.191781 systemd-networkd[711]: eth0: DHCPv6 lease lost May 10 02:14:40.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:14:40.193114 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 02:14:40.195000 audit: BPF prog-id=9 op=UNLOAD May 10 02:14:40.193250 systemd[1]: Stopped systemd-networkd.service. May 10 02:14:40.194778 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 02:14:40.194834 systemd[1]: Closed systemd-networkd.socket. May 10 02:14:40.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.196737 systemd[1]: Stopping network-cleanup.service... May 10 02:14:40.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.200014 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 02:14:40.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.200132 systemd[1]: Stopped parse-ip-for-networkd.service. May 10 02:14:40.201704 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 02:14:40.201774 systemd[1]: Stopped systemd-sysctl.service. May 10 02:14:40.203312 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 02:14:40.203381 systemd[1]: Stopped systemd-modules-load.service. May 10 02:14:40.204385 systemd[1]: Stopping systemd-udevd.service... May 10 02:14:40.213767 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 02:14:40.214852 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 02:14:40.215026 systemd[1]: Stopped systemd-resolved.service. May 10 02:14:40.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.217644 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 02:14:40.217851 systemd[1]: Stopped systemd-udevd.service. May 10 02:14:40.221000 audit: BPF prog-id=6 op=UNLOAD May 10 02:14:40.219970 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 02:14:40.220110 systemd[1]: Closed systemd-udevd-control.socket. May 10 02:14:40.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.222507 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 02:14:40.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.222582 systemd[1]: Closed systemd-udevd-kernel.socket. May 10 02:14:40.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:14:40.225768 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 02:14:40.225836 systemd[1]: Stopped dracut-pre-udev.service. May 10 02:14:40.227185 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 02:14:40.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.227250 systemd[1]: Stopped dracut-cmdline.service. May 10 02:14:40.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.228369 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 02:14:40.228429 systemd[1]: Stopped dracut-cmdline-ask.service. May 10 02:14:40.230838 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 10 02:14:40.246085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 02:14:40.246160 systemd[1]: Stopped systemd-vconsole-setup.service. May 10 02:14:40.247985 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 02:14:40.248149 systemd[1]: Stopped network-cleanup.service. May 10 02:14:40.249128 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 02:14:40.249258 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 10 02:14:40.333804 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 02:14:40.333964 systemd[1]: Stopped sysroot-boot.service. May 10 02:14:40.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.335564 systemd[1]: Reached target initrd-switch-root.target. May 10 02:14:40.336692 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 02:14:40.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:40.336783 systemd[1]: Stopped initrd-setup-root.service. May 10 02:14:40.339119 systemd[1]: Starting initrd-switch-root.service... May 10 02:14:40.349864 systemd[1]: Switching root. May 10 02:14:40.353000 audit: BPF prog-id=8 op=UNLOAD May 10 02:14:40.353000 audit: BPF prog-id=7 op=UNLOAD May 10 02:14:40.355000 audit: BPF prog-id=5 op=UNLOAD May 10 02:14:40.355000 audit: BPF prog-id=4 op=UNLOAD May 10 02:14:40.355000 audit: BPF prog-id=3 op=UNLOAD May 10 02:14:40.368876 iscsid[716]: iscsid shutting down. May 10 02:14:40.369613 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). May 10 02:14:40.369694 systemd-journald[202]: Journal stopped May 10 02:14:44.477504 kernel: SELinux: Class mctp_socket not defined in policy. May 10 02:14:44.477639 kernel: SELinux: Class anon_inode not defined in policy. 
May 10 02:14:44.477667 kernel: SELinux: the above unknown classes and permissions will be allowed May 10 02:14:44.477687 kernel: SELinux: policy capability network_peer_controls=1 May 10 02:14:44.477712 kernel: SELinux: policy capability open_perms=1 May 10 02:14:44.477744 kernel: SELinux: policy capability extended_socket_class=1 May 10 02:14:44.477764 kernel: SELinux: policy capability always_check_network=0 May 10 02:14:44.477786 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 02:14:44.477811 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 02:14:44.477852 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 02:14:44.477874 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 02:14:44.477903 systemd[1]: Successfully loaded SELinux policy in 63.428ms. May 10 02:14:44.477953 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.407ms. May 10 02:14:44.477982 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 02:14:44.478011 systemd[1]: Detected virtualization kvm. May 10 02:14:44.478032 systemd[1]: Detected architecture x86-64. May 10 02:14:44.478052 systemd[1]: Detected first boot. May 10 02:14:44.478084 systemd[1]: Hostname set to . May 10 02:14:44.478108 systemd[1]: Initializing machine ID from VM UUID. May 10 02:14:44.478128 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 10 02:14:44.478163 systemd[1]: Populated /etc with preset unit settings. May 10 02:14:44.478198 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 02:14:44.478219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 02:14:44.478241 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 02:14:44.478290 systemd[1]: Queued start job for default target multi-user.target. May 10 02:14:44.478328 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 10 02:14:44.481163 systemd[1]: Created slice system-addon\x2dconfig.slice. May 10 02:14:44.481206 systemd[1]: Created slice system-addon\x2drun.slice. May 10 02:14:44.481227 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 10 02:14:44.481252 systemd[1]: Created slice system-getty.slice. May 10 02:14:44.481280 systemd[1]: Created slice system-modprobe.slice. May 10 02:14:44.481350 systemd[1]: Created slice system-serial\x2dgetty.slice. May 10 02:14:44.481383 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 10 02:14:44.481404 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 10 02:14:44.481425 systemd[1]: Created slice user.slice. May 10 02:14:44.481451 systemd[1]: Started systemd-ask-password-console.path. May 10 02:14:44.481471 systemd[1]: Started systemd-ask-password-wall.path. May 10 02:14:44.481512 systemd[1]: Set up automount boot.automount. 
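The systemd 252 feature string above reports default-hierarchy=unified, while the Ignition files stage earlier wrote /etc/flatcar-cgroupv1, the marker Flatcar uses to select the legacy hierarchy instead. Once the system is up, the hierarchy actually in effect and the SELinux mode can be checked with:

  # cgroup2fs means the unified (v2) hierarchy; tmpfs indicates legacy/hybrid (v1)
  stat -fc %T /sys/fs/cgroup
  # mode of the SELinux policy loaded above
  getenforce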
May 10 02:14:44.481540 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 10 02:14:44.481561 systemd[1]: Reached target integritysetup.target. May 10 02:14:44.481608 systemd[1]: Reached target remote-cryptsetup.target. May 10 02:14:44.481632 systemd[1]: Reached target remote-fs.target. May 10 02:14:44.481658 systemd[1]: Reached target slices.target. May 10 02:14:44.482151 systemd[1]: Reached target swap.target. May 10 02:14:44.482178 systemd[1]: Reached target torcx.target. May 10 02:14:44.482199 systemd[1]: Reached target veritysetup.target. May 10 02:14:44.482220 systemd[1]: Listening on systemd-coredump.socket. May 10 02:14:44.482260 systemd[1]: Listening on systemd-initctl.socket. May 10 02:14:44.482283 systemd[1]: Listening on systemd-journald-audit.socket. May 10 02:14:44.482335 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 02:14:44.482359 systemd[1]: Listening on systemd-journald.socket. May 10 02:14:44.482382 systemd[1]: Listening on systemd-networkd.socket. May 10 02:14:44.482402 systemd[1]: Listening on systemd-udevd-control.socket. May 10 02:14:44.482421 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 02:14:44.482441 systemd[1]: Listening on systemd-userdbd.socket. May 10 02:14:44.482460 systemd[1]: Mounting dev-hugepages.mount... May 10 02:14:44.482480 systemd[1]: Mounting dev-mqueue.mount... May 10 02:14:44.482516 systemd[1]: Mounting media.mount... May 10 02:14:44.482538 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 02:14:44.482558 systemd[1]: Mounting sys-kernel-debug.mount... May 10 02:14:44.482593 systemd[1]: Mounting sys-kernel-tracing.mount... May 10 02:14:44.482613 systemd[1]: Mounting tmp.mount... May 10 02:14:44.482645 systemd[1]: Starting flatcar-tmpfiles.service... May 10 02:14:44.482665 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 02:14:44.482697 systemd[1]: Starting kmod-static-nodes.service... May 10 02:14:44.482718 systemd[1]: Starting modprobe@configfs.service... May 10 02:14:44.482749 systemd[1]: Starting modprobe@dm_mod.service... May 10 02:14:44.482771 systemd[1]: Starting modprobe@drm.service... May 10 02:14:44.482797 systemd[1]: Starting modprobe@efi_pstore.service... May 10 02:14:44.482829 systemd[1]: Starting modprobe@fuse.service... May 10 02:14:44.482851 systemd[1]: Starting modprobe@loop.service... May 10 02:14:44.482872 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 02:14:44.482894 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 10 02:14:44.482930 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 10 02:14:44.482964 systemd[1]: Starting systemd-journald.service... May 10 02:14:44.482987 systemd[1]: Starting systemd-modules-load.service... May 10 02:14:44.483007 kernel: loop: module loaded May 10 02:14:44.483033 systemd[1]: Starting systemd-network-generator.service... May 10 02:14:44.483053 kernel: fuse: init (API version 7.34) May 10 02:14:44.483073 systemd[1]: Starting systemd-remount-fs.service... May 10 02:14:44.483093 systemd[1]: Starting systemd-udev-trigger.service... May 10 02:14:44.483120 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 10 02:14:44.483157 systemd[1]: Mounted dev-hugepages.mount. May 10 02:14:44.483202 systemd[1]: Mounted dev-mqueue.mount. May 10 02:14:44.483223 systemd[1]: Mounted media.mount. May 10 02:14:44.483255 systemd[1]: Mounted sys-kernel-debug.mount. May 10 02:14:44.483274 systemd[1]: Mounted sys-kernel-tracing.mount. May 10 02:14:44.483322 systemd-journald[1013]: Journal started May 10 02:14:44.483409 systemd-journald[1013]: Runtime Journal (/run/log/journal/6604e0c559554a0c82e6ab29638386ff) is 4.7M, max 38.1M, 33.3M free. May 10 02:14:44.277000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 10 02:14:44.472000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 10 02:14:44.472000 audit[1013]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffce0376fc0 a2=4000 a3=7ffce037705c items=0 ppid=1 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:14:44.472000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 10 02:14:44.501345 systemd[1]: Started systemd-journald.service. May 10 02:14:44.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.489576 systemd[1]: Mounted tmp.mount. May 10 02:14:44.490618 systemd[1]: Finished kmod-static-nodes.service. May 10 02:14:44.492898 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 02:14:44.493144 systemd[1]: Finished modprobe@configfs.service. May 10 02:14:44.495830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 02:14:44.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:14:44.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.496103 systemd[1]: Finished modprobe@dm_mod.service. May 10 02:14:44.501564 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 02:14:44.501826 systemd[1]: Finished modprobe@drm.service. May 10 02:14:44.502958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 02:14:44.504628 systemd[1]: Finished modprobe@efi_pstore.service. May 10 02:14:44.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.505688 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 02:14:44.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.506165 systemd[1]: Finished modprobe@fuse.service. May 10 02:14:44.507286 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 02:14:44.507701 systemd[1]: Finished modprobe@loop.service. May 10 02:14:44.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.510504 systemd[1]: Finished systemd-modules-load.service. May 10 02:14:44.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.511600 systemd[1]: Finished systemd-network-generator.service. May 10 02:14:44.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.512728 systemd[1]: Finished systemd-remount-fs.service. May 10 02:14:44.514145 systemd[1]: Reached target network-pre.target. May 10 02:14:44.516594 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 10 02:14:44.519292 systemd[1]: Mounting sys-kernel-config.mount... 
May 10 02:14:44.525682 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 02:14:44.528557 systemd[1]: Starting systemd-hwdb-update.service... May 10 02:14:44.534122 systemd[1]: Starting systemd-journal-flush.service... May 10 02:14:44.540844 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 02:14:44.542863 systemd[1]: Starting systemd-random-seed.service... May 10 02:14:44.543134 systemd-journald[1013]: Time spent on flushing to /var/log/journal/6604e0c559554a0c82e6ab29638386ff is 57.449ms for 1217 entries. May 10 02:14:44.543134 systemd-journald[1013]: System Journal (/var/log/journal/6604e0c559554a0c82e6ab29638386ff) is 8.0M, max 584.8M, 576.8M free. May 10 02:14:44.607753 systemd-journald[1013]: Received client request to flush runtime journal. May 10 02:14:44.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.545857 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 02:14:44.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.548358 systemd[1]: Starting systemd-sysctl.service... May 10 02:14:44.552030 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 10 02:14:44.553692 systemd[1]: Mounted sys-kernel-config.mount. May 10 02:14:44.562231 systemd[1]: Finished systemd-random-seed.service. May 10 02:14:44.563088 systemd[1]: Reached target first-boot-complete.target. May 10 02:14:44.590352 systemd[1]: Finished systemd-sysctl.service. May 10 02:14:44.608865 systemd[1]: Finished systemd-journal-flush.service. May 10 02:14:44.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.611503 systemd[1]: Finished flatcar-tmpfiles.service. May 10 02:14:44.614243 systemd[1]: Starting systemd-sysusers.service... May 10 02:14:44.663636 systemd[1]: Finished systemd-sysusers.service. May 10 02:14:44.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.668696 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 02:14:44.713450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 02:14:44.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.732992 systemd[1]: Finished systemd-udev-trigger.service. 
May 10 02:14:44.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:44.735535 systemd[1]: Starting systemd-udev-settle.service... May 10 02:14:44.748725 udevadm[1067]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 10 02:14:45.225011 systemd[1]: Finished systemd-hwdb-update.service. May 10 02:14:45.233000 kernel: kauditd_printk_skb: 76 callbacks suppressed May 10 02:14:45.233062 kernel: audit: type=1130 audit(1746843285.225:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.227749 systemd[1]: Starting systemd-udevd.service... May 10 02:14:45.258850 systemd-udevd[1069]: Using default interface naming scheme 'v252'. May 10 02:14:45.290871 systemd[1]: Started systemd-udevd.service. May 10 02:14:45.299846 kernel: audit: type=1130 audit(1746843285.291:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.294106 systemd[1]: Starting systemd-networkd.service... May 10 02:14:45.307796 systemd[1]: Starting systemd-userdbd.service... May 10 02:14:45.370257 systemd[1]: Found device dev-ttyS0.device. May 10 02:14:45.381960 systemd[1]: Started systemd-userdbd.service. May 10 02:14:45.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.388326 kernel: audit: type=1130 audit(1746843285.382:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.457694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 02:14:45.497343 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 10 02:14:45.516428 kernel: ACPI: button: Power Button [PWRF] May 10 02:14:45.549269 systemd-networkd[1074]: lo: Link UP May 10 02:14:45.549283 systemd-networkd[1074]: lo: Gained carrier May 10 02:14:45.550126 systemd-networkd[1074]: Enumeration completed May 10 02:14:45.550320 systemd[1]: Started systemd-networkd.service. May 10 02:14:45.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.551282 systemd-networkd[1074]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 10 02:14:45.557376 kernel: audit: type=1130 audit(1746843285.550:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.558835 systemd-networkd[1074]: eth0: Link UP May 10 02:14:45.558856 systemd-networkd[1074]: eth0: Gained carrier May 10 02:14:45.571338 kernel: mousedev: PS/2 mouse device common for all mice May 10 02:14:45.571465 systemd-networkd[1074]: eth0: DHCPv4 address 10.230.33.70/30, gateway 10.230.33.69 acquired from 10.230.33.69 May 10 02:14:45.616000 audit[1073]: AVC avc: denied { confidentiality } for pid=1073 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 02:14:45.637403 kernel: audit: type=1400 audit(1746843285.616:120): avc: denied { confidentiality } for pid=1073 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 02:14:45.616000 audit[1073]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562caf36dfe0 a1=338ac a2=7f83f203dbc5 a3=5 items=110 ppid=1069 pid=1073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:14:45.674348 kernel: audit: type=1300 audit(1746843285.616:120): arch=c000003e syscall=175 success=yes exit=0 a0=562caf36dfe0 a1=338ac a2=7f83f203dbc5 a3=5 items=110 ppid=1069 pid=1073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:14:45.683197 kernel: audit: type=1307 audit(1746843285.616:120): cwd="/" May 10 02:14:45.683267 kernel: audit: type=1302 audit(1746843285.616:120): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: CWD cwd="/" May 10 02:14:45.616000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.697726 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 10 02:14:45.709591 kernel: audit: type=1302 audit(1746843285.616:120): item=1 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.709649 kernel: audit: type=1302 audit(1746843285.616:120): item=2 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.709688 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 10 02:14:45.709962 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 10 02:14:45.616000 audit: PATH item=1 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=2 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=3 name=(null) inode=14081 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=4 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=5 name=(null) inode=14082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=6 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=7 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=8 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=9 name=(null) inode=14084 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=10 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=11 name=(null) inode=14085 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=12 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=13 name=(null) inode=14086 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=14 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=15 name=(null) inode=14087 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=16 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=17 name=(null) inode=14088 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=18 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=19 name=(null) inode=14089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=20 name=(null) inode=14089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=21 name=(null) inode=14090 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=22 name=(null) inode=14089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=23 name=(null) inode=14091 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=24 name=(null) inode=14089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=25 name=(null) inode=14092 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=26 name=(null) inode=14089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=27 name=(null) inode=14093 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=28 name=(null) inode=14089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=29 name=(null) inode=14094 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=30 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=31 name=(null) inode=14095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=32 name=(null) inode=14095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=33 name=(null) inode=14096 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=34 name=(null) inode=14095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=35 
name=(null) inode=14097 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=36 name=(null) inode=14095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=37 name=(null) inode=14098 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=38 name=(null) inode=14095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=39 name=(null) inode=14099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=40 name=(null) inode=14095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=41 name=(null) inode=14100 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=42 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=43 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=44 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=45 name=(null) inode=14102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=46 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=47 name=(null) inode=14103 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=48 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=49 name=(null) inode=14104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=50 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=51 name=(null) inode=14105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=52 name=(null) inode=14101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=53 name=(null) inode=14106 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=55 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=56 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=57 name=(null) inode=14108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=58 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=59 name=(null) inode=14109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=60 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=61 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=62 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=63 name=(null) inode=14111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=64 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=65 name=(null) inode=14112 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=66 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=67 name=(null) inode=14113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=68 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=69 name=(null) inode=14114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=70 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=71 name=(null) inode=14115 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=72 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=73 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=74 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=75 name=(null) inode=14117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=76 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=77 name=(null) inode=14118 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=78 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=79 name=(null) inode=14119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=80 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=81 name=(null) inode=14120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=82 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=83 name=(null) inode=14121 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=84 
name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=85 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=86 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=87 name=(null) inode=14123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=88 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=89 name=(null) inode=14124 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=90 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=91 name=(null) inode=14125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=92 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=93 name=(null) inode=14126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=94 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=95 name=(null) inode=14127 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=96 name=(null) inode=14107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=97 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=98 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=99 name=(null) inode=14129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=100 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=101 name=(null) inode=14130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=102 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=103 name=(null) inode=14131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=104 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=105 name=(null) inode=14132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=106 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=107 name=(null) inode=14133 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PATH item=109 name=(null) inode=14134 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:14:45.616000 audit: PROCTITLE proctitle="(udev-worker)" May 10 02:14:45.732349 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 May 10 02:14:45.847039 systemd[1]: Finished systemd-udev-settle.service. May 10 02:14:45.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.849814 systemd[1]: Starting lvm2-activation-early.service... May 10 02:14:45.876604 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 02:14:45.906499 systemd[1]: Finished lvm2-activation-early.service. May 10 02:14:45.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.907432 systemd[1]: Reached target cryptsetup.target. May 10 02:14:45.910021 systemd[1]: Starting lvm2-activation.service... May 10 02:14:45.917006 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 02:14:45.947599 systemd[1]: Finished lvm2-activation.service. 
May 10 02:14:45.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:45.948453 systemd[1]: Reached target local-fs-pre.target. May 10 02:14:45.949167 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 02:14:45.949220 systemd[1]: Reached target local-fs.target. May 10 02:14:45.949863 systemd[1]: Reached target machines.target. May 10 02:14:45.952454 systemd[1]: Starting ldconfig.service... May 10 02:14:45.953778 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 02:14:45.953867 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 02:14:45.956255 systemd[1]: Starting systemd-boot-update.service... May 10 02:14:45.959527 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 02:14:45.962441 systemd[1]: Starting systemd-machine-id-commit.service... May 10 02:14:45.965585 systemd[1]: Starting systemd-sysext.service... May 10 02:14:45.987051 systemd[1]: Unmounting usr-share-oem.mount... May 10 02:14:45.988174 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1104 (bootctl) May 10 02:14:45.990195 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 10 02:14:45.994244 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 02:14:45.994745 systemd[1]: Unmounted usr-share-oem.mount. May 10 02:14:46.111887 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 02:14:46.113787 systemd[1]: Finished systemd-machine-id-commit.service. May 10 02:14:46.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.120352 kernel: loop0: detected capacity change from 0 to 210664 May 10 02:14:46.143087 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 02:14:46.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.156786 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 02:14:46.184376 kernel: loop1: detected capacity change from 0 to 210664 May 10 02:14:46.209266 (sd-sysext)[1121]: Using extensions 'kubernetes'. May 10 02:14:46.210638 (sd-sysext)[1121]: Merged extensions into '/usr'. May 10 02:14:46.228622 systemd-fsck[1119]: fsck.fat 4.2 (2021-01-31) May 10 02:14:46.228622 systemd-fsck[1119]: /dev/vda1: 790 files, 120688/258078 clusters May 10 02:14:46.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.231357 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 10 02:14:46.234122 systemd[1]: Mounting boot.mount... May 10 02:14:46.259197 systemd[1]: Mounted boot.mount. 
May 10 02:14:46.269652 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 02:14:46.271986 systemd[1]: Mounting usr-share-oem.mount... May 10 02:14:46.273358 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 02:14:46.275190 systemd[1]: Starting modprobe@dm_mod.service... May 10 02:14:46.279224 systemd[1]: Starting modprobe@efi_pstore.service... May 10 02:14:46.285700 systemd[1]: Starting modprobe@loop.service... May 10 02:14:46.287966 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 02:14:46.288758 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 02:14:46.288998 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 02:14:46.300898 systemd[1]: Mounted usr-share-oem.mount. May 10 02:14:46.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.307147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 02:14:46.307415 systemd[1]: Finished modprobe@dm_mod.service. May 10 02:14:46.308538 systemd[1]: Finished systemd-sysext.service. May 10 02:14:46.309563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 02:14:46.309774 systemd[1]: Finished modprobe@efi_pstore.service. May 10 02:14:46.310877 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 02:14:46.311110 systemd[1]: Finished modprobe@loop.service. May 10 02:14:46.317025 systemd[1]: Starting ensure-sysext.service... May 10 02:14:46.318407 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 10 02:14:46.318557 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 02:14:46.321167 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 02:14:46.333038 systemd[1]: Reloading. May 10 02:14:46.358074 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 10 02:14:46.364174 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 02:14:46.372829 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 02:14:46.463724 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-05-10T02:14:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 02:14:46.469107 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-05-10T02:14:46Z" level=info msg="torcx already run" May 10 02:14:46.537289 ldconfig[1103]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 02:14:46.608205 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 02:14:46.608756 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 02:14:46.637602 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 02:14:46.746820 systemd[1]: Finished ldconfig.service. May 10 02:14:46.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.748117 systemd[1]: Finished systemd-boot-update.service. May 10 02:14:46.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.750576 systemd[1]: Finished systemd-tmpfiles-setup.service. May 10 02:14:46.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.755416 systemd[1]: Starting audit-rules.service... May 10 02:14:46.758005 systemd[1]: Starting clean-ca-certificates.service... May 10 02:14:46.761049 systemd[1]: Starting systemd-journal-catalog-update.service... May 10 02:14:46.764571 systemd[1]: Starting systemd-resolved.service... May 10 02:14:46.768222 systemd[1]: Starting systemd-timesyncd.service... May 10 02:14:46.775836 systemd[1]: Starting systemd-update-utmp.service... May 10 02:14:46.777686 systemd[1]: Finished clean-ca-certificates.service. May 10 02:14:46.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 10 02:14:46.795065 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 02:14:46.798076 systemd[1]: Starting modprobe@dm_mod.service... May 10 02:14:46.801202 systemd[1]: Starting modprobe@efi_pstore.service... May 10 02:14:46.804248 systemd[1]: Starting modprobe@loop.service... May 10 02:14:46.804000 audit[1222]: SYSTEM_BOOT pid=1222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 10 02:14:46.806825 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 02:14:46.807068 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 02:14:46.807292 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 02:14:46.813231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 02:14:46.815410 systemd[1]: Finished modprobe@dm_mod.service. May 10 02:14:46.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.817266 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 02:14:46.817532 systemd[1]: Finished modprobe@efi_pstore.service. May 10 02:14:46.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.818900 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 02:14:46.819160 systemd[1]: Finished modprobe@loop.service. May 10 02:14:46.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.822884 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 02:14:46.823116 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 02:14:46.829807 systemd[1]: Finished systemd-update-utmp.service. 
May 10 02:14:46.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.835154 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 02:14:46.837089 systemd[1]: Starting modprobe@dm_mod.service... May 10 02:14:46.839724 systemd[1]: Starting modprobe@efi_pstore.service... May 10 02:14:46.842218 systemd[1]: Starting modprobe@loop.service... May 10 02:14:46.843040 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 02:14:46.843277 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 02:14:46.844393 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 02:14:46.846091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 02:14:46.846422 systemd[1]: Finished modprobe@efi_pstore.service. May 10 02:14:46.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.848640 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 02:14:46.850776 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 02:14:46.851018 systemd[1]: Finished modprobe@loop.service. May 10 02:14:46.857040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 02:14:46.857288 systemd[1]: Finished modprobe@dm_mod.service. May 10 02:14:46.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.858874 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 02:14:46.861808 systemd[1]: Starting modprobe@drm.service... May 10 02:14:46.867463 systemd[1]: Starting modprobe@efi_pstore.service... May 10 02:14:46.873671 systemd[1]: Starting modprobe@loop.service... 
May 10 02:14:46.874552 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 02:14:46.874772 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 02:14:46.877540 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 02:14:46.879471 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 02:14:46.885284 systemd[1]: Finished ensure-sysext.service. May 10 02:14:46.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.901250 systemd[1]: Finished systemd-journal-catalog-update.service. May 10 02:14:46.902492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 02:14:46.902749 systemd[1]: Finished modprobe@efi_pstore.service. May 10 02:14:46.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.903914 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 02:14:46.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.904136 systemd[1]: Finished modprobe@loop.service. May 10 02:14:46.905003 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 02:14:46.905070 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 02:14:46.907289 systemd[1]: Starting systemd-update-done.service... May 10 02:14:46.909624 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 02:14:46.909941 systemd[1]: Finished modprobe@drm.service. May 10 02:14:46.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.937624 systemd[1]: Finished systemd-update-done.service. 
May 10 02:14:46.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:14:46.968000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 10 02:14:46.968000 audit[1265]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffede12e210 a2=420 a3=0 items=0 ppid=1215 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:14:46.968000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 10 02:14:46.968810 augenrules[1265]: No rules May 10 02:14:46.970052 systemd[1]: Finished audit-rules.service. May 10 02:14:46.994894 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 02:14:46.994938 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 02:14:46.998252 systemd[1]: Started systemd-timesyncd.service. May 10 02:14:47.015893 systemd[1]: Reached target time-set.target. May 10 02:14:47.020024 systemd-resolved[1218]: Positive Trust Anchors: May 10 02:14:47.020540 systemd-resolved[1218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 02:14:47.020709 systemd-resolved[1218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 02:14:47.028769 systemd-resolved[1218]: Using system hostname 'srv-it8yl.gb1.brightbox.com'. May 10 02:14:47.031859 systemd[1]: Started systemd-resolved.service. May 10 02:14:47.032707 systemd[1]: Reached target network.target. May 10 02:14:47.033399 systemd[1]: Reached target nss-lookup.target. May 10 02:14:47.034133 systemd[1]: Reached target sysinit.target. May 10 02:14:47.034945 systemd[1]: Started motdgen.path. May 10 02:14:47.035606 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 10 02:14:47.036659 systemd[1]: Started logrotate.timer. May 10 02:14:47.037448 systemd[1]: Started mdadm.timer. May 10 02:14:47.038043 systemd[1]: Started systemd-tmpfiles-clean.timer. May 10 02:14:47.038759 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 02:14:47.038854 systemd[1]: Reached target paths.target. May 10 02:14:47.039480 systemd[1]: Reached target timers.target. May 10 02:14:47.040783 systemd[1]: Listening on dbus.socket. May 10 02:14:47.043346 systemd[1]: Starting docker.socket... May 10 02:14:47.046192 systemd[1]: Listening on sshd.socket. May 10 02:14:47.047067 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 02:14:47.047660 systemd[1]: Listening on docker.socket. 
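The resolver and time lines above show systemd-resolved loading the root DNSSEC trust anchor (DS 20326) plus the usual negative anchors for private zones, and systemd-timesyncd bringing up time-set.target. A minimal sketch for checking both on a running host, assuming the standard systemd client tools are present:

    resolvectl status             # current DNS servers, DNSSEC state, search domains
    timedatectl timesync-status   # NTP server in use (e.g. one of *.flatcar.pool.ntp.org)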
May 10 02:14:47.048429 systemd[1]: Reached target sockets.target. May 10 02:14:47.049327 systemd[1]: Reached target basic.target. May 10 02:14:47.050360 systemd[1]: System is tainted: cgroupsv1 May 10 02:14:47.050597 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 02:14:47.050809 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 02:14:47.052596 systemd[1]: Starting containerd.service... May 10 02:14:47.055651 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 10 02:14:47.058203 systemd[1]: Starting dbus.service... May 10 02:14:47.060656 systemd[1]: Starting enable-oem-cloudinit.service... May 10 02:14:47.063394 systemd[1]: Starting extend-filesystems.service... May 10 02:14:47.064267 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 10 02:14:47.072068 systemd[1]: Starting motdgen.service... May 10 02:14:47.076660 systemd[1]: Starting prepare-helm.service... May 10 02:14:47.082337 systemd[1]: Starting ssh-key-proc-cmdline.service... May 10 02:14:47.086101 systemd[1]: Starting sshd-keygen.service... May 10 02:14:47.093290 systemd[1]: Starting systemd-logind.service... May 10 02:14:47.096422 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 02:14:47.096564 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 02:14:47.098532 systemd[1]: Starting update-engine.service... May 10 02:14:47.104059 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 10 02:14:47.129008 jq[1292]: true May 10 02:14:47.131213 jq[1280]: false May 10 02:14:47.136645 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 02:14:47.137039 systemd[1]: Finished ssh-key-proc-cmdline.service. May 10 02:14:47.140477 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 02:14:47.140834 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 10 02:14:47.163556 tar[1299]: linux-amd64/helm May 10 02:14:47.172614 jq[1306]: true May 10 02:14:47.202138 dbus-daemon[1277]: [system] SELinux support is enabled May 10 02:14:47.202431 systemd[1]: Started dbus.service. May 10 02:14:47.205551 extend-filesystems[1281]: Found loop1 May 10 02:14:47.206084 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 02:14:47.206144 systemd[1]: Reached target system-config.target. May 10 02:14:47.207342 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 02:14:47.207391 systemd[1]: Reached target user-config.target. May 10 02:14:47.218718 systemd[1]: motdgen.service: Deactivated successfully. May 10 02:14:47.218989 extend-filesystems[1281]: Found vda May 10 02:14:47.219101 systemd[1]: Finished motdgen.service. 
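extend-filesystems.service, started above, enumerates the disk's partitions and then grows the root filesystem on /dev/vda9 online; the resize2fs and EXT4-fs lines further down show it expanding from 1617920 to 15121403 4k blocks. A rough manual equivalent, assuming an ext4 root already mounted read-write:

    lsblk /dev/vda        # confirm vda9 holds the root filesystem
    df -h /               # size before
    resize2fs /dev/vda9   # online grow to fill the partition
    df -h /               # size after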
May 10 02:14:47.222419 dbus-daemon[1277]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1074 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 10 02:14:47.227045 systemd[1]: Starting systemd-hostnamed.service... May 10 02:14:47.227222 extend-filesystems[1281]: Found vda1 May 10 02:14:47.228654 extend-filesystems[1281]: Found vda2 May 10 02:14:47.228654 extend-filesystems[1281]: Found vda3 May 10 02:14:47.228654 extend-filesystems[1281]: Found usr May 10 02:14:47.228654 extend-filesystems[1281]: Found vda4 May 10 02:14:47.228654 extend-filesystems[1281]: Found vda6 May 10 02:14:47.228654 extend-filesystems[1281]: Found vda7 May 10 02:14:47.228654 extend-filesystems[1281]: Found vda9 May 10 02:14:47.228654 extend-filesystems[1281]: Checking size of /dev/vda9 May 10 02:14:47.264816 extend-filesystems[1281]: Resized partition /dev/vda9 May 10 02:14:47.288175 update_engine[1289]: I0510 02:14:47.287483 1289 main.cc:92] Flatcar Update Engine starting May 10 02:14:47.292753 systemd[1]: Started update-engine.service. May 10 02:14:47.293254 extend-filesystems[1337]: resize2fs 1.46.5 (30-Dec-2021) May 10 02:14:47.297243 systemd[1]: Started locksmithd.service. May 10 02:14:47.302331 update_engine[1289]: I0510 02:14:47.299964 1289 update_check_scheduler.cc:74] Next update check in 10m48s May 10 02:14:47.303242 systemd[1]: Created slice system-sshd.slice. May 10 02:14:47.322329 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks May 10 02:14:47.376436 env[1300]: time="2025-05-10T02:14:47.375742736Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 10 02:14:47.382330 bash[1338]: Updated "/home/core/.ssh/authorized_keys" May 10 02:14:47.382805 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 10 02:14:48.378484 systemd-resolved[1218]: Clock change detected. Flushing caches. May 10 02:14:48.378869 systemd-timesyncd[1219]: Contacted time server 85.199.214.100:123 (0.flatcar.pool.ntp.org). May 10 02:14:48.379175 systemd-timesyncd[1219]: Initial clock synchronization to Sat 2025-05-10 02:14:48.378187 UTC. May 10 02:14:48.457451 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 10 02:14:48.474691 extend-filesystems[1337]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 10 02:14:48.474691 extend-filesystems[1337]: old_desc_blocks = 1, new_desc_blocks = 8 May 10 02:14:48.474691 extend-filesystems[1337]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 10 02:14:48.485028 extend-filesystems[1281]: Resized filesystem in /dev/vda9 May 10 02:14:48.475018 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 02:14:48.475402 systemd[1]: Finished extend-filesystems.service. May 10 02:14:48.493916 systemd-logind[1288]: Watching system buttons on /dev/input/event2 (Power Button) May 10 02:14:48.493959 systemd-logind[1288]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 02:14:48.497349 systemd-logind[1288]: New seat seat0. May 10 02:14:48.508922 systemd[1]: Started systemd-logind.service. May 10 02:14:48.512392 env[1300]: time="2025-05-10T02:14:48.512227237Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 10 02:14:48.512465 env[1300]: time="2025-05-10T02:14:48.512415888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 02:14:48.514490 env[1300]: time="2025-05-10T02:14:48.514429604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 02:14:48.514577 env[1300]: time="2025-05-10T02:14:48.514488409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 02:14:48.514879 env[1300]: time="2025-05-10T02:14:48.514830449Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 02:14:48.514976 env[1300]: time="2025-05-10T02:14:48.514881068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 02:14:48.514976 env[1300]: time="2025-05-10T02:14:48.514904877Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 10 02:14:48.514976 env[1300]: time="2025-05-10T02:14:48.514921616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 02:14:48.515101 env[1300]: time="2025-05-10T02:14:48.515049644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 02:14:48.515496 env[1300]: time="2025-05-10T02:14:48.515465097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 02:14:48.515761 env[1300]: time="2025-05-10T02:14:48.515720361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 02:14:48.515761 env[1300]: time="2025-05-10T02:14:48.515755569Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 02:14:48.515862 env[1300]: time="2025-05-10T02:14:48.515840275Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 10 02:14:48.515946 env[1300]: time="2025-05-10T02:14:48.515862487Z" level=info msg="metadata content store policy set" policy=shared May 10 02:14:48.523749 env[1300]: time="2025-05-10T02:14:48.523706219Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 02:14:48.523849 env[1300]: time="2025-05-10T02:14:48.523791684Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 02:14:48.523849 env[1300]: time="2025-05-10T02:14:48.523839674Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 02:14:48.523967 env[1300]: time="2025-05-10T02:14:48.523925237Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 10 02:14:48.524049 env[1300]: time="2025-05-10T02:14:48.524012750Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 02:14:48.524129 env[1300]: time="2025-05-10T02:14:48.524063130Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 02:14:48.524129 env[1300]: time="2025-05-10T02:14:48.524086411Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 02:14:48.524129 env[1300]: time="2025-05-10T02:14:48.524107592Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 02:14:48.524298 env[1300]: time="2025-05-10T02:14:48.524164670Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 10 02:14:48.524298 env[1300]: time="2025-05-10T02:14:48.524190552Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 02:14:48.524298 env[1300]: time="2025-05-10T02:14:48.524232694Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 02:14:48.524298 env[1300]: time="2025-05-10T02:14:48.524260646Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 02:14:48.524520 env[1300]: time="2025-05-10T02:14:48.524471255Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 02:14:48.524770 env[1300]: time="2025-05-10T02:14:48.524739018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 02:14:48.525999 env[1300]: time="2025-05-10T02:14:48.525945220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 02:14:48.526070 env[1300]: time="2025-05-10T02:14:48.526022172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526129 env[1300]: time="2025-05-10T02:14:48.526066658Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 02:14:48.526309 env[1300]: time="2025-05-10T02:14:48.526188305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526309 env[1300]: time="2025-05-10T02:14:48.526239070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526309 env[1300]: time="2025-05-10T02:14:48.526275008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526519 env[1300]: time="2025-05-10T02:14:48.526319422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526519 env[1300]: time="2025-05-10T02:14:48.526341987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526519 env[1300]: time="2025-05-10T02:14:48.526391130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526519 env[1300]: time="2025-05-10T02:14:48.526417528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 10 02:14:48.526519 env[1300]: time="2025-05-10T02:14:48.526451525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526519 env[1300]: time="2025-05-10T02:14:48.526491505Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 02:14:48.526867 env[1300]: time="2025-05-10T02:14:48.526834307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526930 env[1300]: time="2025-05-10T02:14:48.526867780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 02:14:48.526930 env[1300]: time="2025-05-10T02:14:48.526908266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 02:14:48.527054 env[1300]: time="2025-05-10T02:14:48.526927389Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 02:14:48.527054 env[1300]: time="2025-05-10T02:14:48.526974026Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 10 02:14:48.527054 env[1300]: time="2025-05-10T02:14:48.526995984Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 02:14:48.527192 env[1300]: time="2025-05-10T02:14:48.527074629Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 10 02:14:48.527192 env[1300]: time="2025-05-10T02:14:48.527176961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 10 02:14:48.528964 env[1300]: time="2025-05-10T02:14:48.527934758Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 02:14:48.531050 env[1300]: time="2025-05-10T02:14:48.529434898Z" level=info msg="Connect containerd service" May 10 02:14:48.531050 env[1300]: time="2025-05-10T02:14:48.530062118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 02:14:48.534456 env[1300]: time="2025-05-10T02:14:48.534404443Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 02:14:48.534811 env[1300]: time="2025-05-10T02:14:48.534761212Z" level=info msg="Start subscribing containerd event" May 10 02:14:48.534891 env[1300]: time="2025-05-10T02:14:48.534856686Z" level=info msg="Start recovering state" May 10 02:14:48.535060 env[1300]: time="2025-05-10T02:14:48.535030496Z" level=info msg="Start event monitor" May 10 02:14:48.535259 env[1300]: time="2025-05-10T02:14:48.535226726Z" level=info msg="Start snapshots syncer" May 10 02:14:48.535326 env[1300]: time="2025-05-10T02:14:48.535274587Z" level=info msg="Start cni network conf syncer for default" May 10 02:14:48.535326 env[1300]: time="2025-05-10T02:14:48.535291045Z" level=info msg="Start streaming server" May 10 02:14:48.538921 env[1300]: time="2025-05-10T02:14:48.538887306Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 10 02:14:48.539150 env[1300]: time="2025-05-10T02:14:48.539097197Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 02:14:48.543096 systemd-networkd[1074]: eth0: Gained IPv6LL May 10 02:14:48.547878 systemd[1]: Finished systemd-networkd-wait-online.service. May 10 02:14:48.549127 systemd[1]: Reached target network-online.target. May 10 02:14:48.552472 systemd[1]: Starting kubelet.service... May 10 02:14:48.578805 env[1300]: time="2025-05-10T02:14:48.578751831Z" level=info msg="containerd successfully booted in 0.221952s" May 10 02:14:48.578859 systemd[1]: Started containerd.service. May 10 02:14:48.583087 dbus-daemon[1277]: [system] Successfully activated service 'org.freedesktop.hostname1' May 10 02:14:48.583525 systemd[1]: Started systemd-hostnamed.service. May 10 02:14:48.583873 dbus-daemon[1277]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1323 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 10 02:14:48.588392 systemd[1]: Starting polkit.service... May 10 02:14:48.606737 polkitd[1353]: Started polkitd version 121 May 10 02:14:48.625450 polkitd[1353]: Loading rules from directory /etc/polkit-1/rules.d May 10 02:14:48.625747 polkitd[1353]: Loading rules from directory /usr/share/polkit-1/rules.d May 10 02:14:48.629868 polkitd[1353]: Finished loading, compiling and executing 2 rules May 10 02:14:48.633556 dbus-daemon[1277]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 10 02:14:48.633808 systemd[1]: Started polkit.service. May 10 02:14:48.634975 polkitd[1353]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 10 02:14:48.652521 systemd-hostnamed[1323]: Hostname set to (static) May 10 02:14:49.053315 systemd-networkd[1074]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8851:24:19ff:fee6:2146/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8851:24:19ff:fee6:2146/64 assigned by NDisc. May 10 02:14:49.053328 systemd-networkd[1074]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. May 10 02:14:49.186297 tar[1299]: linux-amd64/LICENSE May 10 02:14:49.186941 tar[1299]: linux-amd64/README.md May 10 02:14:49.193426 systemd[1]: Finished prepare-helm.service. May 10 02:14:49.221126 locksmithd[1339]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 02:14:49.713761 systemd[1]: Started kubelet.service. May 10 02:14:50.482357 kubelet[1373]: E0510 02:14:50.482299 1373 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 02:14:50.484757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 02:14:50.485043 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 02:14:50.887383 sshd_keygen[1311]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 02:14:50.913734 systemd[1]: Finished sshd-keygen.service. May 10 02:14:50.923310 systemd[1]: Starting issuegen.service... May 10 02:14:50.927063 systemd[1]: Started sshd@0-10.230.33.70:22-139.178.68.195:48954.service. 
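The kubelet failure above is expected at this point in the boot: the unit is configured to read /var/lib/kubelet/config.yaml, which does not exist until the node is bootstrapped (typically kubeadm init/join writes it), so systemd keeps restarting the service until then, as the later "Scheduled restart job" entries show. A minimal sketch for confirming this, assuming shell access on the node:

    ls -l /var/lib/kubelet/config.yaml            # missing until bootstrap
    systemctl status kubelet.service --no-pager
    journalctl -u kubelet -b --no-pager | tail -n 5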
May 10 02:14:50.935727 systemd[1]: issuegen.service: Deactivated successfully. May 10 02:14:50.936106 systemd[1]: Finished issuegen.service. May 10 02:14:50.939242 systemd[1]: Starting systemd-user-sessions.service... May 10 02:14:50.951294 systemd[1]: Finished systemd-user-sessions.service. May 10 02:14:50.954218 systemd[1]: Started getty@tty1.service. May 10 02:14:50.958563 systemd[1]: Started serial-getty@ttyS0.service. May 10 02:14:50.960017 systemd[1]: Reached target getty.target. May 10 02:14:51.846042 sshd[1390]: Accepted publickey for core from 139.178.68.195 port 48954 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:14:51.849279 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:14:51.866381 systemd[1]: Created slice user-500.slice. May 10 02:14:51.869189 systemd[1]: Starting user-runtime-dir@500.service... May 10 02:14:51.875748 systemd-logind[1288]: New session 1 of user core. May 10 02:14:51.886985 systemd[1]: Finished user-runtime-dir@500.service. May 10 02:14:51.891220 systemd[1]: Starting user@500.service... May 10 02:14:51.897818 (systemd)[1403]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 02:14:52.011767 systemd[1403]: Queued start job for default target default.target. May 10 02:14:52.012114 systemd[1403]: Reached target paths.target. May 10 02:14:52.012141 systemd[1403]: Reached target sockets.target. May 10 02:14:52.012162 systemd[1403]: Reached target timers.target. May 10 02:14:52.012182 systemd[1403]: Reached target basic.target. May 10 02:14:52.012346 systemd[1]: Started user@500.service. May 10 02:14:52.016539 systemd[1403]: Reached target default.target. May 10 02:14:52.016622 systemd[1403]: Startup finished in 110ms. May 10 02:14:52.017167 systemd[1]: Started session-1.scope. May 10 02:14:52.647547 systemd[1]: Started sshd@1-10.230.33.70:22-139.178.68.195:48960.service. May 10 02:14:53.544970 sshd[1412]: Accepted publickey for core from 139.178.68.195 port 48960 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:14:53.548636 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:14:53.556434 systemd[1]: Started session-2.scope. May 10 02:14:53.558889 systemd-logind[1288]: New session 2 of user core. May 10 02:14:54.168287 sshd[1412]: pam_unix(sshd:session): session closed for user core May 10 02:14:54.172303 systemd[1]: sshd@1-10.230.33.70:22-139.178.68.195:48960.service: Deactivated successfully. May 10 02:14:54.173474 systemd[1]: session-2.scope: Deactivated successfully. May 10 02:14:54.174446 systemd-logind[1288]: Session 2 logged out. Waiting for processes to exit. May 10 02:14:54.175527 systemd-logind[1288]: Removed session 2. May 10 02:14:54.315931 systemd[1]: Started sshd@2-10.230.33.70:22-139.178.68.195:48972.service. May 10 02:14:55.209850 sshd[1419]: Accepted publickey for core from 139.178.68.195 port 48972 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:14:55.211888 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:14:55.218704 systemd-logind[1288]: New session 3 of user core. May 10 02:14:55.219929 systemd[1]: Started session-3.scope. 
May 10 02:14:55.244763 coreos-metadata[1276]: May 10 02:14:55.244 WARN failed to locate config-drive, using the metadata service API instead May 10 02:14:55.299650 coreos-metadata[1276]: May 10 02:14:55.299 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 10 02:14:55.332610 coreos-metadata[1276]: May 10 02:14:55.332 INFO Fetch successful May 10 02:14:55.333121 coreos-metadata[1276]: May 10 02:14:55.332 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 10 02:14:55.360305 coreos-metadata[1276]: May 10 02:14:55.360 INFO Fetch successful May 10 02:14:55.362506 unknown[1276]: wrote ssh authorized keys file for user: core May 10 02:14:55.375643 update-ssh-keys[1425]: Updated "/home/core/.ssh/authorized_keys" May 10 02:14:55.376235 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 10 02:14:55.377486 systemd[1]: Reached target multi-user.target. May 10 02:14:55.380441 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 10 02:14:55.394464 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 10 02:14:55.394853 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 10 02:14:55.405272 systemd[1]: Startup finished in 9.007s (kernel) + 13.889s (userspace) = 22.896s. May 10 02:14:55.831606 sshd[1419]: pam_unix(sshd:session): session closed for user core May 10 02:14:55.835777 systemd-logind[1288]: Session 3 logged out. Waiting for processes to exit. May 10 02:14:55.837006 systemd[1]: sshd@2-10.230.33.70:22-139.178.68.195:48972.service: Deactivated successfully. May 10 02:14:55.838174 systemd[1]: session-3.scope: Deactivated successfully. May 10 02:14:55.839814 systemd-logind[1288]: Removed session 3. May 10 02:15:00.545532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 02:15:00.545926 systemd[1]: Stopped kubelet.service. May 10 02:15:00.548455 systemd[1]: Starting kubelet.service... May 10 02:15:00.714335 systemd[1]: Started kubelet.service. May 10 02:15:00.813305 kubelet[1442]: E0510 02:15:00.812866 1442 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 02:15:00.817873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 02:15:00.818144 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 02:15:05.978580 systemd[1]: Started sshd@3-10.230.33.70:22-139.178.68.195:54844.service. May 10 02:15:06.864673 sshd[1450]: Accepted publickey for core from 139.178.68.195 port 54844 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:15:06.867300 sshd[1450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:15:06.875091 systemd[1]: Started session-4.scope. May 10 02:15:06.875551 systemd-logind[1288]: New session 4 of user core. May 10 02:15:07.482318 sshd[1450]: pam_unix(sshd:session): session closed for user core May 10 02:15:07.486372 systemd-logind[1288]: Session 4 logged out. Waiting for processes to exit. May 10 02:15:07.486855 systemd[1]: sshd@3-10.230.33.70:22-139.178.68.195:54844.service: Deactivated successfully. May 10 02:15:07.487914 systemd[1]: session-4.scope: Deactivated successfully. May 10 02:15:07.489062 systemd-logind[1288]: Removed session 4. 
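coreos-metadata, above, could not find a config-drive and fell back to the link-local metadata service to fetch the instance's SSH public keys before updating /home/core/.ssh/authorized_keys. Roughly the same fetch by hand, using the endpoints shown in the log (no authentication assumed):

    curl -s http://169.254.169.254/latest/meta-data/public-keys
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key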
May 10 02:15:07.634458 systemd[1]: Started sshd@4-10.230.33.70:22-139.178.68.195:54852.service. May 10 02:15:08.550722 sshd[1457]: Accepted publickey for core from 139.178.68.195 port 54852 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:15:08.552747 sshd[1457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:15:08.561051 systemd[1]: Started session-5.scope. May 10 02:15:08.561524 systemd-logind[1288]: New session 5 of user core. May 10 02:15:09.168832 sshd[1457]: pam_unix(sshd:session): session closed for user core May 10 02:15:09.172476 systemd[1]: sshd@4-10.230.33.70:22-139.178.68.195:54852.service: Deactivated successfully. May 10 02:15:09.173570 systemd[1]: session-5.scope: Deactivated successfully. May 10 02:15:09.174670 systemd-logind[1288]: Session 5 logged out. Waiting for processes to exit. May 10 02:15:09.175881 systemd-logind[1288]: Removed session 5. May 10 02:15:09.312660 systemd[1]: Started sshd@5-10.230.33.70:22-139.178.68.195:54862.service. May 10 02:15:10.200379 sshd[1464]: Accepted publickey for core from 139.178.68.195 port 54862 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:15:10.203487 sshd[1464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:15:10.211144 systemd-logind[1288]: New session 6 of user core. May 10 02:15:10.212204 systemd[1]: Started session-6.scope. May 10 02:15:10.822903 sshd[1464]: pam_unix(sshd:session): session closed for user core May 10 02:15:10.827259 systemd[1]: sshd@5-10.230.33.70:22-139.178.68.195:54862.service: Deactivated successfully. May 10 02:15:10.829054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 02:15:10.829289 systemd[1]: Stopped kubelet.service. May 10 02:15:10.832060 systemd[1]: Starting kubelet.service... May 10 02:15:10.832544 systemd[1]: session-6.scope: Deactivated successfully. May 10 02:15:10.834967 systemd-logind[1288]: Session 6 logged out. Waiting for processes to exit. May 10 02:15:10.843936 systemd-logind[1288]: Removed session 6. May 10 02:15:10.968963 systemd[1]: Started sshd@6-10.230.33.70:22-139.178.68.195:54868.service. May 10 02:15:10.974088 systemd[1]: Started kubelet.service. May 10 02:15:11.060469 kubelet[1480]: E0510 02:15:11.060343 1480 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 02:15:11.063650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 02:15:11.063950 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 02:15:12.329197 sshd[1478]: Accepted publickey for core from 139.178.68.195 port 54868 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:15:12.331441 sshd[1478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:15:12.338979 systemd-logind[1288]: New session 7 of user core. May 10 02:15:12.339855 systemd[1]: Started session-7.scope. 
May 10 02:15:12.833994 sudo[1490]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 10 02:15:12.834378 sudo[1490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 02:15:12.843012 dbus-daemon[1277]: \xd0M\xde<\xa4U: received setenforce notice (enforcing=-147815104) May 10 02:15:12.845756 sudo[1490]: pam_unix(sudo:session): session closed for user root May 10 02:15:12.990378 sshd[1478]: pam_unix(sshd:session): session closed for user core May 10 02:15:12.994557 systemd-logind[1288]: Session 7 logged out. Waiting for processes to exit. May 10 02:15:12.994979 systemd[1]: sshd@6-10.230.33.70:22-139.178.68.195:54868.service: Deactivated successfully. May 10 02:15:12.996129 systemd[1]: session-7.scope: Deactivated successfully. May 10 02:15:12.996975 systemd-logind[1288]: Removed session 7. May 10 02:15:13.139208 systemd[1]: Started sshd@7-10.230.33.70:22-139.178.68.195:54880.service. May 10 02:15:14.042286 sshd[1494]: Accepted publickey for core from 139.178.68.195 port 54880 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:15:14.044472 sshd[1494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:15:14.051700 systemd-logind[1288]: New session 8 of user core. May 10 02:15:14.052621 systemd[1]: Started session-8.scope. May 10 02:15:14.524218 sudo[1499]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 10 02:15:14.524713 sudo[1499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 02:15:14.529344 sudo[1499]: pam_unix(sudo:session): session closed for user root May 10 02:15:14.536262 sudo[1498]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 10 02:15:14.537042 sudo[1498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 02:15:14.551204 systemd[1]: Stopping audit-rules.service... 
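The sudo entries above remove the shipped audit rule files and restart audit-rules.service; the audit records that follow show the rebuild, with auditctl flushing the in-kernel rule list and augenrules then finding no rules to load. The hex proctitle fields are just NUL-separated command lines; a quick way to decode them, plus the equivalent manual steps (illustrative only):

    # decode a proctitle, e.g. the one logged below -> "/sbin/auditctl -D"
    echo 2F7362696E2F617564697463746C002D44 | xxd -r -p | tr '\0' ' '; echo
    auditctl -l          # list currently loaded rules
    auditctl -D          # delete all rules (what the restart does first)
    augenrules --load    # regenerate from /etc/audit/rules.d/ and load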
May 10 02:15:14.552000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 10 02:15:14.557237 kernel: kauditd_printk_skb: 151 callbacks suppressed May 10 02:15:14.557365 kernel: audit: type=1305 audit(1746843314.552:162): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 10 02:15:14.552000 audit[1502]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc307f2900 a2=420 a3=0 items=0 ppid=1 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:14.567503 kernel: audit: type=1300 audit(1746843314.552:162): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc307f2900 a2=420 a3=0 items=0 ppid=1 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:14.567708 kernel: audit: type=1327 audit(1746843314.552:162): proctitle=2F7362696E2F617564697463746C002D44 May 10 02:15:14.552000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 10 02:15:14.567863 auditctl[1502]: No rules May 10 02:15:14.574847 kernel: audit: type=1131 audit(1746843314.569:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:14.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:14.569240 systemd[1]: audit-rules.service: Deactivated successfully. May 10 02:15:14.569775 systemd[1]: Stopped audit-rules.service. May 10 02:15:14.573528 systemd[1]: Starting audit-rules.service... May 10 02:15:14.606080 augenrules[1520]: No rules May 10 02:15:14.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:14.609154 sudo[1498]: pam_unix(sudo:session): session closed for user root May 10 02:15:14.607671 systemd[1]: Finished audit-rules.service. May 10 02:15:14.613669 kernel: audit: type=1130 audit(1746843314.607:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:14.608000 audit[1498]: USER_END pid=1498 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 10 02:15:14.608000 audit[1498]: CRED_DISP pid=1498 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 10 02:15:14.625802 kernel: audit: type=1106 audit(1746843314.608:165): pid=1498 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' May 10 02:15:14.625912 kernel: audit: type=1104 audit(1746843314.608:166): pid=1498 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 10 02:15:14.758265 sshd[1494]: pam_unix(sshd:session): session closed for user core May 10 02:15:14.759000 audit[1494]: USER_END pid=1494 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:15:14.763262 systemd[1]: sshd@7-10.230.33.70:22-139.178.68.195:54880.service: Deactivated successfully. May 10 02:15:14.764457 systemd[1]: session-8.scope: Deactivated successfully. May 10 02:15:14.765530 systemd-logind[1288]: Session 8 logged out. Waiting for processes to exit. May 10 02:15:14.759000 audit[1494]: CRED_DISP pid=1494 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:15:14.767624 systemd-logind[1288]: Removed session 8. May 10 02:15:14.774122 kernel: audit: type=1106 audit(1746843314.759:167): pid=1494 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:15:14.774216 kernel: audit: type=1104 audit(1746843314.759:168): pid=1494 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:15:14.774266 kernel: audit: type=1131 audit(1746843314.759:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.33.70:22-139.178.68.195:54880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:14.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.33.70:22-139.178.68.195:54880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:14.902199 systemd[1]: Started sshd@8-10.230.33.70:22-139.178.68.195:54884.service. May 10 02:15:14.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.33.70:22-139.178.68.195:54884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:15:15.786000 audit[1527]: USER_ACCT pid=1527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:15:15.787684 sshd[1527]: Accepted publickey for core from 139.178.68.195 port 54884 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:15:15.788000 audit[1527]: CRED_ACQ pid=1527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:15:15.788000 audit[1527]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd95c43b20 a2=3 a3=0 items=0 ppid=1 pid=1527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:15.788000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:15:15.790335 sshd[1527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:15:15.798328 systemd-logind[1288]: New session 9 of user core. May 10 02:15:15.799203 systemd[1]: Started session-9.scope. May 10 02:15:15.806000 audit[1527]: USER_START pid=1527 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:15:15.808000 audit[1530]: CRED_ACQ pid=1530 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:15:16.262000 audit[1531]: USER_ACCT pid=1531 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 10 02:15:16.263498 sudo[1531]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 02:15:16.263000 audit[1531]: CRED_REFR pid=1531 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 10 02:15:16.264481 sudo[1531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 02:15:16.266000 audit[1531]: USER_START pid=1531 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 10 02:15:16.306991 systemd[1]: Starting docker.service... 
May 10 02:15:16.364617 env[1541]: time="2025-05-10T02:15:16.364525943Z" level=info msg="Starting up" May 10 02:15:16.368358 env[1541]: time="2025-05-10T02:15:16.368236800Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 02:15:16.368358 env[1541]: time="2025-05-10T02:15:16.368283665Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 02:15:16.368358 env[1541]: time="2025-05-10T02:15:16.368335199Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 02:15:16.368358 env[1541]: time="2025-05-10T02:15:16.368357960Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 02:15:16.372086 env[1541]: time="2025-05-10T02:15:16.372009915Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 02:15:16.372086 env[1541]: time="2025-05-10T02:15:16.372040073Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 02:15:16.372086 env[1541]: time="2025-05-10T02:15:16.372059583Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 02:15:16.372086 env[1541]: time="2025-05-10T02:15:16.372073850Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 02:15:16.381770 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1963834241-merged.mount: Deactivated successfully. May 10 02:15:16.532178 env[1541]: time="2025-05-10T02:15:16.531342778Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 10 02:15:16.532476 env[1541]: time="2025-05-10T02:15:16.532421712Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 10 02:15:16.532950 env[1541]: time="2025-05-10T02:15:16.532922570Z" level=info msg="Loading containers: start." 
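dockerd, starting above, connects to containerd over /var/run/docker/libcontainerd/docker-containerd.sock and warns that the kernel lacks cgroup blkio weight support; together with the earlier "System is tainted: cgroupsv1" line this points to a cgroup v1 host. A minimal sketch for checking the cgroup setup once the daemon is up, with the docker CLI assumed installed:

    stat -fc %T /sys/fs/cgroup/      # tmpfs = cgroup v1, cgroup2fs = cgroup v2
    docker info | grep -i cgroup     # reported cgroup driver and version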
May 10 02:15:16.620000 audit[1574]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.620000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff300b84b0 a2=0 a3=7fff300b849c items=0 ppid=1541 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.620000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 10 02:15:16.624000 audit[1576]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.624000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc30bd0aa0 a2=0 a3=7ffc30bd0a8c items=0 ppid=1541 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.624000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 10 02:15:16.627000 audit[1578]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.627000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffff0c9b7c0 a2=0 a3=7ffff0c9b7ac items=0 ppid=1541 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.627000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 10 02:15:16.630000 audit[1580]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.630000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdb65df370 a2=0 a3=7ffdb65df35c items=0 ppid=1541 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.630000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 10 02:15:16.634000 audit[1582]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.634000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe727d6b30 a2=0 a3=7ffe727d6b1c items=0 ppid=1541 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.634000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 10 02:15:16.657000 audit[1587]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1587 subj=system_u:system_r:kernel_t:s0 
comm="iptables" May 10 02:15:16.657000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc53451a00 a2=0 a3=7ffc534519ec items=0 ppid=1541 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.657000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 10 02:15:16.667000 audit[1589]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.667000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef131b0a0 a2=0 a3=7ffef131b08c items=0 ppid=1541 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.667000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 10 02:15:16.671000 audit[1591]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.671000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe90623fd0 a2=0 a3=7ffe90623fbc items=0 ppid=1541 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.671000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 10 02:15:16.674000 audit[1593]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.674000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff473d5fd0 a2=0 a3=7fff473d5fbc items=0 ppid=1541 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.674000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 10 02:15:16.684000 audit[1597]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.684000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcbbdcf570 a2=0 a3=7ffcbbdcf55c items=0 ppid=1541 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.684000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 10 02:15:16.690000 audit[1598]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.690000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe31c02ed0 a2=0 a3=7ffe31c02ebc items=0 ppid=1541 
pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.690000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 10 02:15:16.706905 kernel: Initializing XFRM netlink socket May 10 02:15:16.763308 env[1541]: time="2025-05-10T02:15:16.763108879Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 10 02:15:16.805000 audit[1606]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1606 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.805000 audit[1606]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffeffb95c90 a2=0 a3=7ffeffb95c7c items=0 ppid=1541 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.805000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 10 02:15:16.818000 audit[1609]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1609 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.818000 audit[1609]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc9c24a750 a2=0 a3=7ffc9c24a73c items=0 ppid=1541 pid=1609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.818000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 10 02:15:16.824000 audit[1612]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1612 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.824000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd624e8320 a2=0 a3=7ffd624e830c items=0 ppid=1541 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.824000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 10 02:15:16.827000 audit[1614]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.827000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdc259d3d0 a2=0 a3=7ffdc259d3bc items=0 ppid=1541 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.827000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 10 02:15:16.831000 audit[1616]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1616 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.831000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc5c633c10 a2=0 a3=7ffc5c633bfc items=0 ppid=1541 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.831000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 10 02:15:16.834000 audit[1618]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1618 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.834000 audit[1618]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc13336c90 a2=0 a3=7ffc13336c7c items=0 ppid=1541 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.834000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 10 02:15:16.838000 audit[1620]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1620 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.838000 audit[1620]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffce4edf4f0 a2=0 a3=7ffce4edf4dc items=0 ppid=1541 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.838000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 10 02:15:16.849000 audit[1623]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1623 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.849000 audit[1623]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffefa32e4f0 a2=0 a3=7ffefa32e4dc items=0 ppid=1541 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.849000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 10 02:15:16.853000 audit[1625]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1625 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.853000 audit[1625]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe38c9e8a0 a2=0 a3=7ffe38c9e88c items=0 ppid=1541 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.853000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 10 02:15:16.857000 audit[1627]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1627 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.857000 audit[1627]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff4717d990 a2=0 a3=7fff4717d97c items=0 ppid=1541 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.857000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 10 02:15:16.860000 audit[1629]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.860000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe8c795590 a2=0 a3=7ffe8c79557c items=0 ppid=1541 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.860000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 10 02:15:16.861968 systemd-networkd[1074]: docker0: Link UP May 10 02:15:16.874000 audit[1633]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1633 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.874000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff0e8dfa50 a2=0 a3=7fff0e8dfa3c items=0 ppid=1541 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.874000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 10 02:15:16.880000 audit[1634]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1634 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:16.880000 audit[1634]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffeaf35ffe0 a2=0 a3=7ffeaf35ffcc items=0 ppid=1541 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:16.880000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 10 02:15:16.882217 env[1541]: time="2025-05-10T02:15:16.882024229Z" level=info msg="Loading containers: done." May 10 02:15:16.900798 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3629065458-merged.mount: Deactivated successfully. 
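The audit PROCTITLE fields above are the hex-encoded, NUL-separated argv of each iptables call Docker issues while wiring up the DOCKER, DOCKER-USER and DOCKER-ISOLATION-STAGE chains. A minimal decoding sketch in Python, using the proctitle value from the 02:15:16.874 record above as sample input:

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    proctitle = (
        "2F7573722F7362696E2F69707461626C6573002D2D77616974"
        "002D4400464F5257415244002D6A00444F434B45522D55534552"
    )
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /usr/sbin/iptables --wait -D FORWARD -j DOCKER-USER
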
May 10 02:15:16.911929 env[1541]: time="2025-05-10T02:15:16.911884550Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 02:15:16.912453 env[1541]: time="2025-05-10T02:15:16.912405426Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 10 02:15:16.912790 env[1541]: time="2025-05-10T02:15:16.912751683Z" level=info msg="Daemon has completed initialization" May 10 02:15:16.946366 systemd[1]: Started docker.service. May 10 02:15:16.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:16.957179 env[1541]: time="2025-05-10T02:15:16.957103533Z" level=info msg="API listen on /run/docker.sock" May 10 02:15:18.530228 env[1300]: time="2025-05-10T02:15:18.530076961Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 10 02:15:19.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:19.082813 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 10 02:15:19.569158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3199528259.mount: Deactivated successfully. May 10 02:15:21.301678 kernel: kauditd_printk_skb: 85 callbacks suppressed May 10 02:15:21.301877 kernel: audit: type=1130 audit(1746843321.295:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:21.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:21.295492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 10 02:15:21.295835 systemd[1]: Stopped kubelet.service. May 10 02:15:21.301086 systemd[1]: Starting kubelet.service... May 10 02:15:21.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:21.315692 kernel: audit: type=1131 audit(1746843321.295:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:21.466116 systemd[1]: Started kubelet.service. May 10 02:15:21.473997 kernel: audit: type=1130 audit(1746843321.465:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:21.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:15:21.545480 kubelet[1685]: E0510 02:15:21.545412 1685 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 02:15:21.548148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 02:15:21.548464 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 02:15:21.554773 kernel: audit: type=1131 audit(1746843321.548:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 10 02:15:21.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 10 02:15:22.799037 env[1300]: time="2025-05-10T02:15:22.798908171Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:22.804062 env[1300]: time="2025-05-10T02:15:22.802585599Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:22.805890 env[1300]: time="2025-05-10T02:15:22.805835906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:22.808692 env[1300]: time="2025-05-10T02:15:22.808659676Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:22.810152 env[1300]: time="2025-05-10T02:15:22.810095799Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 10 02:15:22.826164 env[1300]: time="2025-05-10T02:15:22.826119168Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 10 02:15:26.175286 env[1300]: time="2025-05-10T02:15:26.175181765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:26.179624 env[1300]: time="2025-05-10T02:15:26.179580453Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:26.184312 env[1300]: time="2025-05-10T02:15:26.184265748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:26.186559 env[1300]: time="2025-05-10T02:15:26.186520549Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:26.187959 env[1300]: time="2025-05-10T02:15:26.187921102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 10 02:15:26.204437 env[1300]: time="2025-05-10T02:15:26.204400722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 10 02:15:29.366929 env[1300]: time="2025-05-10T02:15:29.366806856Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:29.369570 env[1300]: time="2025-05-10T02:15:29.369530501Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:29.372772 env[1300]: time="2025-05-10T02:15:29.372729159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:29.376584 env[1300]: time="2025-05-10T02:15:29.376530253Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:29.378017 env[1300]: time="2025-05-10T02:15:29.377975734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 10 02:15:29.392066 env[1300]: time="2025-05-10T02:15:29.392023542Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 10 02:15:31.742346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount325533893.mount: Deactivated successfully. May 10 02:15:31.743761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 10 02:15:31.752045 kernel: audit: type=1130 audit(1746843331.742:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:31.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:31.743988 systemd[1]: Stopped kubelet.service. May 10 02:15:31.749331 systemd[1]: Starting kubelet.service... May 10 02:15:31.758710 kernel: audit: type=1131 audit(1746843331.742:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:31.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:31.891264 systemd[1]: Started kubelet.service. 
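The kubelet failures at 02:15:21 and again just below are the usual pre-bootstrap crash loop: the unit exits because /var/lib/kubelet/config.yaml does not exist yet (kubeadm normally writes it during init or join), and systemd keeps scheduling restarts. A minimal sketch of the same pre-flight check, assuming the standard kubeadm-managed path:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # default kubeadm-managed path

    # Mirrors the run.go:74 failure above: until a provisioner drops a
    # KubeletConfiguration at this path, the kubelet has nothing to load
    # and exits, which systemd records as status=1/FAILURE.
    if not os.path.isfile(KUBELET_CONFIG):
        sys.exit(f"failed to load kubelet config file: {KUBELET_CONFIG}: no such file or directory")
    print("kubelet config present")
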
May 10 02:15:31.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:31.897702 kernel: audit: type=1130 audit(1746843331.890:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:31.987124 kubelet[1716]: E0510 02:15:31.987053 1716 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 02:15:31.989712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 02:15:31.990066 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 02:15:31.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 10 02:15:32.000453 kernel: audit: type=1131 audit(1746843331.989:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 10 02:15:32.945091 env[1300]: time="2025-05-10T02:15:32.944989026Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:32.948430 env[1300]: time="2025-05-10T02:15:32.948370498Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:32.950554 env[1300]: time="2025-05-10T02:15:32.950493464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:32.952484 env[1300]: time="2025-05-10T02:15:32.952441350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:32.953384 env[1300]: time="2025-05-10T02:15:32.953319859Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 10 02:15:32.971592 env[1300]: time="2025-05-10T02:15:32.971550023Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 02:15:33.520579 update_engine[1289]: I0510 02:15:33.519762 1289 update_attempter.cc:509] Updating boot flags... May 10 02:15:34.089855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903830507.mount: Deactivated successfully. 
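Each of the image pulls above and below finishes with a "returns image reference" line pairing the requested tag with the sha256 image ID containerd resolved. A minimal sketch, assuming this journal has been saved to a file, that collects those pairings:

    import re

    # Matches containerd lines such as:
    #   PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea...\"
    PATTERN = re.compile(
        r'PullImage \\"(?P<tag>[^"\\]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]+)\\"'
    )

    def pulled_images(journal_text: str) -> dict[str, str]:
        """Map each pulled image tag to the sha256 reference containerd resolved."""
        return {m["tag"]: m["ref"] for m in PATTERN.finditer(journal_text)}

    with open("node-boot.log") as fh:  # hypothetical dump of this journal
        for tag, ref in pulled_images(fh.read()).items():
            print(tag, "->", ref)
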
May 10 02:15:37.245612 env[1300]: time="2025-05-10T02:15:37.245503466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:37.249045 env[1300]: time="2025-05-10T02:15:37.248999974Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:37.251879 env[1300]: time="2025-05-10T02:15:37.251808862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:37.255327 env[1300]: time="2025-05-10T02:15:37.255261186Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:37.256667 env[1300]: time="2025-05-10T02:15:37.256586164Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 10 02:15:37.272958 env[1300]: time="2025-05-10T02:15:37.272918072Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 10 02:15:37.861494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907893165.mount: Deactivated successfully. May 10 02:15:37.869280 env[1300]: time="2025-05-10T02:15:37.869214302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:37.870915 env[1300]: time="2025-05-10T02:15:37.870869550Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:37.873651 env[1300]: time="2025-05-10T02:15:37.873591586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:37.876655 env[1300]: time="2025-05-10T02:15:37.876599361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:37.877573 env[1300]: time="2025-05-10T02:15:37.877530722Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 10 02:15:37.893205 env[1300]: time="2025-05-10T02:15:37.893120465Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 02:15:38.856369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223077200.mount: Deactivated successfully. May 10 02:15:42.045716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 10 02:15:42.046068 systemd[1]: Stopped kubelet.service. May 10 02:15:42.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:15:42.061017 kernel: audit: type=1130 audit(1746843342.044:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:42.061182 kernel: audit: type=1131 audit(1746843342.044:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:42.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:42.061230 systemd[1]: Starting kubelet.service... May 10 02:15:42.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:42.219079 systemd[1]: Started kubelet.service. May 10 02:15:42.228003 kernel: audit: type=1130 audit(1746843342.217:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:42.292340 kubelet[1763]: E0510 02:15:42.292243 1763 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 02:15:42.294535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 02:15:42.294913 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 02:15:42.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 10 02:15:42.302134 kernel: audit: type=1131 audit(1746843342.293:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' May 10 02:15:46.065312 env[1300]: time="2025-05-10T02:15:46.065121992Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:46.069098 env[1300]: time="2025-05-10T02:15:46.069061932Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:46.071660 env[1300]: time="2025-05-10T02:15:46.071595471Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:46.073209 env[1300]: time="2025-05-10T02:15:46.073132828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 10 02:15:46.075314 env[1300]: time="2025-05-10T02:15:46.074520671Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:50.236769 systemd[1]: Stopped kubelet.service. May 10 02:15:50.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:50.244713 systemd[1]: Starting kubelet.service... May 10 02:15:50.245702 kernel: audit: type=1130 audit(1746843350.235:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:50.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:50.251661 kernel: audit: type=1131 audit(1746843350.235:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:50.285190 systemd[1]: Reloading. May 10 02:15:50.434333 /usr/lib/systemd/system-generators/torcx-generator[1859]: time="2025-05-10T02:15:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 02:15:50.436119 /usr/lib/systemd/system-generators/torcx-generator[1859]: time="2025-05-10T02:15:50Z" level=info msg="torcx already run" May 10 02:15:50.562075 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 02:15:50.562351 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 02:15:50.592327 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
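The reload above surfaces three legacy settings: locksmithd.service still uses CPUShares= and MemoryLimit= (superseded by CPUWeight= and MemoryMax=), and docker.socket still points ListenStream= at /var/run/docker.sock instead of /run/docker.sock. A minimal sketch, assuming the cited unit files are readable, that reports the same directives:

    from pathlib import Path

    # Successor directives named in the systemd warnings above.
    LEGACY = {
        "CPUShares=": "CPUWeight=",
        "MemoryLimit=": "MemoryMax=",
        "ListenStream=/var/run/": "ListenStream=/run/",
    }

    def report_legacy_directives(unit_path: str) -> None:
        """Print each deprecated directive found in a unit file and its replacement."""
        for lineno, line in enumerate(Path(unit_path).read_text().splitlines(), start=1):
            for old, new in LEGACY.items():
                if line.strip().startswith(old):
                    print(f"{unit_path}:{lineno}: {old} is legacy; use {new}")

    for unit in ("/usr/lib/systemd/system/locksmithd.service",
                 "/run/systemd/system/docker.socket"):
        report_legacy_directives(unit)
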
May 10 02:15:50.725683 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 10 02:15:50.726137 systemd[1]: kubelet.service: Failed with result 'signal'. May 10 02:15:50.726939 systemd[1]: Stopped kubelet.service. May 10 02:15:50.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 10 02:15:50.735670 kernel: audit: type=1130 audit(1746843350.726:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 10 02:15:50.739821 systemd[1]: Starting kubelet.service... May 10 02:15:50.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:50.928893 systemd[1]: Started kubelet.service. May 10 02:15:50.936690 kernel: audit: type=1130 audit(1746843350.928:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:51.033941 kubelet[1920]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 02:15:51.034567 kubelet[1920]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 02:15:51.034771 kubelet[1920]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 02:15:51.035095 kubelet[1920]: I0510 02:15:51.035036 1920 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 02:15:51.551673 kubelet[1920]: I0510 02:15:51.551571 1920 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 02:15:51.551673 kubelet[1920]: I0510 02:15:51.551655 1920 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 02:15:51.552062 kubelet[1920]: I0510 02:15:51.552031 1920 server.go:927] "Client rotation is on, will bootstrap in background" May 10 02:15:51.576309 kubelet[1920]: I0510 02:15:51.576265 1920 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 02:15:51.576948 kubelet[1920]: E0510 02:15:51.576895 1920 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.33.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:51.604342 kubelet[1920]: I0510 02:15:51.604286 1920 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 02:15:51.606672 kubelet[1920]: I0510 02:15:51.606592 1920 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 02:15:51.607020 kubelet[1920]: I0510 02:15:51.606673 1920 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-it8yl.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 02:15:51.607795 kubelet[1920]: I0510 02:15:51.607763 1920 topology_manager.go:138] "Creating topology manager with none policy" May 10 02:15:51.607795 kubelet[1920]: I0510 02:15:51.607794 1920 container_manager_linux.go:301] "Creating device plugin manager" May 10 02:15:51.608086 kubelet[1920]: I0510 02:15:51.608048 1920 state_mem.go:36] "Initialized new in-memory state store" May 10 02:15:51.609278 kubelet[1920]: I0510 02:15:51.609232 1920 kubelet.go:400] "Attempting to sync node with API server" May 10 02:15:51.609278 kubelet[1920]: I0510 02:15:51.609265 1920 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 02:15:51.609452 kubelet[1920]: I0510 02:15:51.609322 1920 kubelet.go:312] "Adding apiserver pod source" May 10 02:15:51.609452 kubelet[1920]: I0510 02:15:51.609366 1920 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 02:15:51.619503 kubelet[1920]: W0510 02:15:51.619420 1920 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.33.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-it8yl.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:51.619606 kubelet[1920]: E0510 02:15:51.619524 1920 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.33.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-it8yl.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:51.619730 kubelet[1920]: W0510 02:15:51.619645 1920 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.33.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:51.619730 kubelet[1920]: E0510 02:15:51.619708 1920 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.33.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:51.619909 kubelet[1920]: I0510 02:15:51.619876 1920 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 02:15:51.622774 kubelet[1920]: I0510 02:15:51.622736 1920 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 02:15:51.622880 kubelet[1920]: W0510 02:15:51.622856 1920 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 02:15:51.626450 kubelet[1920]: I0510 02:15:51.626411 1920 server.go:1264] "Started kubelet" May 10 02:15:51.646384 kubelet[1920]: I0510 02:15:51.646332 1920 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 02:15:51.648973 kubelet[1920]: I0510 02:15:51.648930 1920 server.go:455] "Adding debug handlers to kubelet server" May 10 02:15:51.649000 audit[1920]: AVC avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:15:51.652444 kubelet[1920]: I0510 02:15:51.652356 1920 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 02:15:51.652931 kubelet[1920]: I0510 02:15:51.652895 1920 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 02:15:51.653329 kubelet[1920]: E0510 02:15:51.653162 1920 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.33.70:6443/api/v1/namespaces/default/events\": dial tcp 10.230.33.70:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-it8yl.gb1.brightbox.com.183e08c5ebf14315 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-it8yl.gb1.brightbox.com,UID:srv-it8yl.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-it8yl.gb1.brightbox.com,},FirstTimestamp:2025-05-10 02:15:51.626367765 +0000 UTC m=+0.681825595,LastTimestamp:2025-05-10 02:15:51.626367765 +0000 UTC m=+0.681825595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-it8yl.gb1.brightbox.com,}" May 10 02:15:51.649000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:15:51.660042 kernel: audit: type=1400 audit(1746843351.649:221): avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:15:51.660154 kernel: audit: type=1401 audit(1746843351.649:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:15:51.660361 kubelet[1920]: I0510 02:15:51.660323 1920 kubelet.go:1419] "Unprivileged 
containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 10 02:15:51.660560 kubelet[1920]: I0510 02:15:51.660517 1920 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 10 02:15:51.660950 kubelet[1920]: I0510 02:15:51.660918 1920 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 02:15:51.649000 audit[1920]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b11e90 a1=c0009b4648 a2=c000b11e60 a3=25 items=0 ppid=1 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.664618 kubelet[1920]: E0510 02:15:51.664591 1920 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 02:15:51.669582 kubelet[1920]: E0510 02:15:51.669540 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-it8yl.gb1.brightbox.com\" not found" May 10 02:15:51.669719 kernel: audit: type=1300 audit(1746843351.649:221): arch=c000003e syscall=188 success=no exit=-22 a0=c000b11e90 a1=c0009b4648 a2=c000b11e60 a3=25 items=0 ppid=1 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.670358 kubelet[1920]: I0510 02:15:51.670335 1920 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 02:15:51.670681 kubelet[1920]: I0510 02:15:51.670657 1920 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 02:15:51.670928 kubelet[1920]: I0510 02:15:51.670908 1920 reconciler.go:26] "Reconciler: start to sync state" May 10 02:15:51.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 10 02:15:51.671910 kubelet[1920]: W0510 02:15:51.671865 1920 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.33.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:51.672069 kubelet[1920]: E0510 02:15:51.672044 1920 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.33.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:51.672292 kubelet[1920]: E0510 02:15:51.672256 1920 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.33.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-it8yl.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.33.70:6443: connect: connection refused" interval="200ms" May 10 02:15:51.659000 audit[1920]: AVC avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:15:51.678994 kubelet[1920]: I0510 02:15:51.678955 1920 factory.go:221] Registration of the systemd container factory successfully May 10 02:15:51.679224 kubelet[1920]: I0510 02:15:51.679195 1920 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 02:15:51.681376 kubelet[1920]: I0510 02:15:51.681355 1920 factory.go:221] Registration of the containerd container factory successfully May 10 02:15:51.683815 kernel: audit: type=1327 audit(1746843351.649:221): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 10 02:15:51.683896 kernel: audit: type=1400 audit(1746843351.659:222): avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:15:51.683981 kernel: audit: type=1401 audit(1746843351.659:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:15:51.659000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:15:51.659000 audit[1920]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b5d860 a1=c0009b4660 a2=c000b11f20 a3=25 items=0 ppid=1 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.659000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 10 02:15:51.670000 audit[1930]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:51.670000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd1dcf7ea0 a2=0 a3=7ffd1dcf7e8c items=0 ppid=1920 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.670000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 10 02:15:51.687000 audit[1931]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:51.687000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf36925f0 a2=0 a3=7ffcf36925dc items=0 ppid=1920 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.687000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 10 02:15:51.696000 audit[1935]: NETFILTER_CFG table=filter:28 family=2 entries=2 
op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:51.696000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcfff30990 a2=0 a3=7ffcfff3097c items=0 ppid=1920 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 10 02:15:51.709000 audit[1938]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:51.709000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff2f0fb980 a2=0 a3=7fff2f0fb96c items=0 ppid=1920 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 10 02:15:51.730859 kubelet[1920]: I0510 02:15:51.730818 1920 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 02:15:51.730859 kubelet[1920]: I0510 02:15:51.730848 1920 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 02:15:51.732508 kubelet[1920]: I0510 02:15:51.730880 1920 state_mem.go:36] "Initialized new in-memory state store" May 10 02:15:51.733047 kubelet[1920]: I0510 02:15:51.733023 1920 policy_none.go:49] "None policy: Start" May 10 02:15:51.734724 kubelet[1920]: I0510 02:15:51.734690 1920 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 02:15:51.734977 kubelet[1920]: I0510 02:15:51.734924 1920 state_mem.go:35] "Initializing new in-memory state store" May 10 02:15:51.736000 audit[1943]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:51.736000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff61e37fe0 a2=0 a3=7fff61e37fcc items=0 ppid=1920 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 10 02:15:51.738206 kubelet[1920]: I0510 02:15:51.738139 1920 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 10 02:15:51.739000 audit[1945]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:15:51.739000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffbc2c24a0 a2=0 a3=7fffbc2c248c items=0 ppid=1920 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.739000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 10 02:15:51.740000 audit[1946]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:51.740000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9e5f14d0 a2=0 a3=7ffc9e5f14bc items=0 ppid=1920 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 10 02:15:51.742000 audit[1947]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:51.742000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0a6cd6d0 a2=0 a3=7ffd0a6cd6bc items=0 ppid=1920 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 10 02:15:51.743456 kubelet[1920]: I0510 02:15:51.743342 1920 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 02:15:51.743456 kubelet[1920]: I0510 02:15:51.743377 1920 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 02:15:51.743456 kubelet[1920]: I0510 02:15:51.743406 1920 kubelet.go:2337] "Starting kubelet main sync loop" May 10 02:15:51.743676 kubelet[1920]: E0510 02:15:51.743475 1920 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 02:15:51.746610 kubelet[1920]: I0510 02:15:51.746581 1920 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 02:15:51.746000 audit[1920]: AVC avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:15:51.746000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:15:51.746000 audit[1920]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f20b70 a1=c000f090e0 a2=c000f20b40 a3=25 items=0 ppid=1 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.746000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 10 02:15:51.747404 kubelet[1920]: I0510 02:15:51.747350 1920 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 10 02:15:51.748000 audit[1949]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:15:51.750012 kubelet[1920]: I0510 02:15:51.749689 1920 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 02:15:51.750012 kubelet[1920]: I0510 02:15:51.749883 1920 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 02:15:51.748000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe24d057b0 a2=0 a3=7ffe24d0579c items=0 ppid=1920 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 10 02:15:51.751654 kubelet[1920]: W0510 02:15:51.751549 1920 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.33.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:51.751769 kubelet[1920]: E0510 02:15:51.751674 1920 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.33.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:51.752930 kubelet[1920]: E0510 02:15:51.752901 1920 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-it8yl.gb1.brightbox.com\" not found" May 10 02:15:51.754000 audit[1950]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:15:51.754000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff1a29f420 a2=0 a3=7fff1a29f40c items=0 ppid=1920 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.754000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 10 02:15:51.755000 audit[1948]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:15:51.755000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc7239710 a2=0 a3=7fffc72396fc items=0 ppid=1920 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 10 02:15:51.756000 audit[1951]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 
02:15:51.756000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd8be73ae0 a2=0 a3=7ffd8be73acc items=0 ppid=1920 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:15:51.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 10 02:15:51.774419 kubelet[1920]: I0510 02:15:51.774373 1920 kubelet_node_status.go:73] "Attempting to register node" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:51.775234 kubelet[1920]: E0510 02:15:51.775185 1920 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.33.70:6443/api/v1/nodes\": dial tcp 10.230.33.70:6443: connect: connection refused" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:51.844279 kubelet[1920]: I0510 02:15:51.844065 1920 topology_manager.go:215] "Topology Admit Handler" podUID="b3b5e1a762b0e0b1a86669c4a05c1056" podNamespace="kube-system" podName="kube-scheduler-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.849651 kubelet[1920]: I0510 02:15:51.849597 1920 topology_manager.go:215] "Topology Admit Handler" podUID="4023df08c7c2d7cd8d2940e6bde23fba" podNamespace="kube-system" podName="kube-apiserver-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.852615 kubelet[1920]: I0510 02:15:51.852583 1920 topology_manager.go:215] "Topology Admit Handler" podUID="3f39851ab8dea1dd70e96e72fe31fc98" podNamespace="kube-system" podName="kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.873089 kubelet[1920]: I0510 02:15:51.873042 1920 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4023df08c7c2d7cd8d2940e6bde23fba-ca-certs\") pod \"kube-apiserver-srv-it8yl.gb1.brightbox.com\" (UID: \"4023df08c7c2d7cd8d2940e6bde23fba\") " pod="kube-system/kube-apiserver-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.873257 kubelet[1920]: I0510 02:15:51.873097 1920 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4023df08c7c2d7cd8d2940e6bde23fba-k8s-certs\") pod \"kube-apiserver-srv-it8yl.gb1.brightbox.com\" (UID: \"4023df08c7c2d7cd8d2940e6bde23fba\") " pod="kube-system/kube-apiserver-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.873257 kubelet[1920]: I0510 02:15:51.873132 1920 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-k8s-certs\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.873257 kubelet[1920]: I0510 02:15:51.873161 1920 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-kubeconfig\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.873257 kubelet[1920]: I0510 02:15:51.873189 1920 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4023df08c7c2d7cd8d2940e6bde23fba-usr-share-ca-certificates\") pod \"kube-apiserver-srv-it8yl.gb1.brightbox.com\" (UID: \"4023df08c7c2d7cd8d2940e6bde23fba\") " pod="kube-system/kube-apiserver-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.873257 kubelet[1920]: I0510 02:15:51.873217 1920 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-ca-certs\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.873544 kubelet[1920]: I0510 02:15:51.873257 1920 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-flexvolume-dir\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.873544 kubelet[1920]: I0510 02:15:51.873286 1920 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.873544 kubelet[1920]: I0510 02:15:51.873348 1920 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3b5e1a762b0e0b1a86669c4a05c1056-kubeconfig\") pod \"kube-scheduler-srv-it8yl.gb1.brightbox.com\" (UID: \"b3b5e1a762b0e0b1a86669c4a05c1056\") " pod="kube-system/kube-scheduler-srv-it8yl.gb1.brightbox.com" May 10 02:15:51.874162 kubelet[1920]: E0510 02:15:51.874119 1920 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.33.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-it8yl.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.33.70:6443: connect: connection refused" interval="400ms" May 10 02:15:51.979133 kubelet[1920]: I0510 02:15:51.979087 1920 kubelet_node_status.go:73] "Attempting to register node" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:51.980621 kubelet[1920]: E0510 02:15:51.980586 1920 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.33.70:6443/api/v1/nodes\": dial tcp 10.230.33.70:6443: connect: connection refused" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:52.166295 env[1300]: time="2025-05-10T02:15:52.165678366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-it8yl.gb1.brightbox.com,Uid:4023df08c7c2d7cd8d2940e6bde23fba,Namespace:kube-system,Attempt:0,}" May 10 02:15:52.167579 env[1300]: time="2025-05-10T02:15:52.167528020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-it8yl.gb1.brightbox.com,Uid:3f39851ab8dea1dd70e96e72fe31fc98,Namespace:kube-system,Attempt:0,}" May 10 02:15:52.169110 env[1300]: time="2025-05-10T02:15:52.168888299Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-srv-it8yl.gb1.brightbox.com,Uid:b3b5e1a762b0e0b1a86669c4a05c1056,Namespace:kube-system,Attempt:0,}" May 10 02:15:52.275885 kubelet[1920]: E0510 02:15:52.275811 1920 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.33.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-it8yl.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.33.70:6443: connect: connection refused" interval="800ms" May 10 02:15:52.384656 kubelet[1920]: I0510 02:15:52.384153 1920 kubelet_node_status.go:73] "Attempting to register node" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:52.384656 kubelet[1920]: E0510 02:15:52.384557 1920 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.33.70:6443/api/v1/nodes\": dial tcp 10.230.33.70:6443: connect: connection refused" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:52.617287 kubelet[1920]: W0510 02:15:52.617170 1920 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.33.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:52.617287 kubelet[1920]: E0510 02:15:52.617236 1920 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.33.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:52.705236 kubelet[1920]: W0510 02:15:52.705141 1920 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.33.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-it8yl.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:52.705236 kubelet[1920]: E0510 02:15:52.705200 1920 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.33.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-it8yl.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:52.842623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1358527413.mount: Deactivated successfully. 
May 10 02:15:52.853764 env[1300]: time="2025-05-10T02:15:52.853711206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.858448 env[1300]: time="2025-05-10T02:15:52.858360801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.861019 env[1300]: time="2025-05-10T02:15:52.860982655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.862146 env[1300]: time="2025-05-10T02:15:52.862112313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.864961 env[1300]: time="2025-05-10T02:15:52.864911769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.867559 env[1300]: time="2025-05-10T02:15:52.867452368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.870227 env[1300]: time="2025-05-10T02:15:52.870187864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.871553 env[1300]: time="2025-05-10T02:15:52.871508196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.877182 env[1300]: time="2025-05-10T02:15:52.877141018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.879735 kubelet[1920]: W0510 02:15:52.879693 1920 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.33.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:52.879854 kubelet[1920]: E0510 02:15:52.879770 1920 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.33.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:52.881260 env[1300]: time="2025-05-10T02:15:52.881195774Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.882229 env[1300]: time="2025-05-10T02:15:52.882185272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.884602 env[1300]: time="2025-05-10T02:15:52.884560331Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:15:52.934596 env[1300]: time="2025-05-10T02:15:52.934489590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:15:52.934856 env[1300]: time="2025-05-10T02:15:52.934575335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:15:52.934856 env[1300]: time="2025-05-10T02:15:52.934593029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:15:52.935328 env[1300]: time="2025-05-10T02:15:52.935261537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:15:52.935517 env[1300]: time="2025-05-10T02:15:52.935463505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:15:52.935823 env[1300]: time="2025-05-10T02:15:52.935768573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49ab9b4e9534571cc7ad8601e0e5c633449bae82845e608245878ccaaf776be9 pid=1961 runtime=io.containerd.runc.v2 May 10 02:15:52.936043 env[1300]: time="2025-05-10T02:15:52.935988330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:15:52.936673 env[1300]: time="2025-05-10T02:15:52.936607780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/203263e93ccfba65638223c851f5ecce3bbce16b0fd6fe9557454491b2621e09 pid=1976 runtime=io.containerd.runc.v2 May 10 02:15:52.943221 env[1300]: time="2025-05-10T02:15:52.943135034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:15:52.943398 env[1300]: time="2025-05-10T02:15:52.943196265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:15:52.943398 env[1300]: time="2025-05-10T02:15:52.943213439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:15:52.943601 env[1300]: time="2025-05-10T02:15:52.943467214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6adcaf0b72991874aee0b4969749fa37b7218c31becd4cd26114f7f68c5df02e pid=1986 runtime=io.containerd.runc.v2 May 10 02:15:53.017567 kubelet[1920]: W0510 02:15:53.017375 1920 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.33.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:53.017567 kubelet[1920]: E0510 02:15:53.017473 1920 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.33.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:53.077176 kubelet[1920]: E0510 02:15:53.077113 1920 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.33.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-it8yl.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.33.70:6443: connect: connection refused" interval="1.6s" May 10 02:15:53.083340 env[1300]: time="2025-05-10T02:15:53.083270634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-it8yl.gb1.brightbox.com,Uid:3f39851ab8dea1dd70e96e72fe31fc98,Namespace:kube-system,Attempt:0,} returns sandbox id \"203263e93ccfba65638223c851f5ecce3bbce16b0fd6fe9557454491b2621e09\"" May 10 02:15:53.090231 env[1300]: time="2025-05-10T02:15:53.090193188Z" level=info msg="CreateContainer within sandbox \"203263e93ccfba65638223c851f5ecce3bbce16b0fd6fe9557454491b2621e09\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 02:15:53.105152 env[1300]: time="2025-05-10T02:15:53.104921702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-it8yl.gb1.brightbox.com,Uid:4023df08c7c2d7cd8d2940e6bde23fba,Namespace:kube-system,Attempt:0,} returns sandbox id \"49ab9b4e9534571cc7ad8601e0e5c633449bae82845e608245878ccaaf776be9\"" May 10 02:15:53.108737 env[1300]: time="2025-05-10T02:15:53.108702970Z" level=info msg="CreateContainer within sandbox \"49ab9b4e9534571cc7ad8601e0e5c633449bae82845e608245878ccaaf776be9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 02:15:53.122931 env[1300]: time="2025-05-10T02:15:53.122774210Z" level=info msg="CreateContainer within sandbox \"203263e93ccfba65638223c851f5ecce3bbce16b0fd6fe9557454491b2621e09\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec9547f6cd7bda2cf7351bde40db28c9f0b7d450c92588f56b6cfac26ab1e99f\"" May 10 02:15:53.124402 env[1300]: time="2025-05-10T02:15:53.124355134Z" level=info msg="StartContainer for \"ec9547f6cd7bda2cf7351bde40db28c9f0b7d450c92588f56b6cfac26ab1e99f\"" May 10 02:15:53.126291 env[1300]: time="2025-05-10T02:15:53.126253546Z" level=info msg="CreateContainer within sandbox \"49ab9b4e9534571cc7ad8601e0e5c633449bae82845e608245878ccaaf776be9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca90d7c52eaa652c633a6876f1830b74d71d7bfeed3a03a2f8f222162262837e\"" May 10 02:15:53.126926 env[1300]: time="2025-05-10T02:15:53.126874964Z" level=info msg="StartContainer for 
\"ca90d7c52eaa652c633a6876f1830b74d71d7bfeed3a03a2f8f222162262837e\"" May 10 02:15:53.133972 env[1300]: time="2025-05-10T02:15:53.133930368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-it8yl.gb1.brightbox.com,Uid:b3b5e1a762b0e0b1a86669c4a05c1056,Namespace:kube-system,Attempt:0,} returns sandbox id \"6adcaf0b72991874aee0b4969749fa37b7218c31becd4cd26114f7f68c5df02e\"" May 10 02:15:53.136864 env[1300]: time="2025-05-10T02:15:53.136819581Z" level=info msg="CreateContainer within sandbox \"6adcaf0b72991874aee0b4969749fa37b7218c31becd4cd26114f7f68c5df02e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 02:15:53.149621 env[1300]: time="2025-05-10T02:15:53.149567929Z" level=info msg="CreateContainer within sandbox \"6adcaf0b72991874aee0b4969749fa37b7218c31becd4cd26114f7f68c5df02e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"778b24687054368c6f86077781786844d15a8229a71b18b318252ad1403a4801\"" May 10 02:15:53.150510 env[1300]: time="2025-05-10T02:15:53.150477000Z" level=info msg="StartContainer for \"778b24687054368c6f86077781786844d15a8229a71b18b318252ad1403a4801\"" May 10 02:15:53.198842 kubelet[1920]: I0510 02:15:53.198466 1920 kubelet_node_status.go:73] "Attempting to register node" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:53.199425 kubelet[1920]: E0510 02:15:53.199370 1920 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.33.70:6443/api/v1/nodes\": dial tcp 10.230.33.70:6443: connect: connection refused" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:53.294802 env[1300]: time="2025-05-10T02:15:53.294736375Z" level=info msg="StartContainer for \"ca90d7c52eaa652c633a6876f1830b74d71d7bfeed3a03a2f8f222162262837e\" returns successfully" May 10 02:15:53.313169 env[1300]: time="2025-05-10T02:15:53.313108057Z" level=info msg="StartContainer for \"ec9547f6cd7bda2cf7351bde40db28c9f0b7d450c92588f56b6cfac26ab1e99f\" returns successfully" May 10 02:15:53.357154 env[1300]: time="2025-05-10T02:15:53.357090844Z" level=info msg="StartContainer for \"778b24687054368c6f86077781786844d15a8229a71b18b318252ad1403a4801\" returns successfully" May 10 02:15:53.744528 kubelet[1920]: E0510 02:15:53.744482 1920 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.33.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.33.70:6443: connect: connection refused May 10 02:15:54.802872 kubelet[1920]: I0510 02:15:54.802828 1920 kubelet_node_status.go:73] "Attempting to register node" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:55.971605 kubelet[1920]: E0510 02:15:55.971552 1920 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-it8yl.gb1.brightbox.com\" not found" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:56.058040 kubelet[1920]: I0510 02:15:56.057973 1920 kubelet_node_status.go:76] "Successfully registered node" node="srv-it8yl.gb1.brightbox.com" May 10 02:15:56.617220 kubelet[1920]: I0510 02:15:56.617095 1920 apiserver.go:52] "Watching apiserver" May 10 02:15:56.671934 kubelet[1920]: I0510 02:15:56.671902 1920 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 02:15:56.892239 kubelet[1920]: W0510 02:15:56.891989 1920 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 02:15:58.428112 systemd[1]: Reloading. May 10 02:15:58.535476 /usr/lib/systemd/system-generators/torcx-generator[2208]: time="2025-05-10T02:15:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 02:15:58.536245 /usr/lib/systemd/system-generators/torcx-generator[2208]: time="2025-05-10T02:15:58Z" level=info msg="torcx already run" May 10 02:15:58.719506 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 02:15:58.719538 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 02:15:58.759788 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 02:15:58.933415 systemd[1]: Stopping kubelet.service... May 10 02:15:58.952320 systemd[1]: kubelet.service: Deactivated successfully. May 10 02:15:58.952784 systemd[1]: Stopped kubelet.service. May 10 02:15:58.963912 kernel: kauditd_printk_skb: 42 callbacks suppressed May 10 02:15:58.964136 kernel: audit: type=1131 audit(1746843358.952:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:58.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:15:58.961439 systemd[1]: Starting kubelet.service... May 10 02:16:00.176111 systemd[1]: Started kubelet.service. May 10 02:16:00.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:16:00.186356 kernel: audit: type=1130 audit(1746843360.176:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:16:00.328764 kubelet[2271]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 02:16:00.328764 kubelet[2271]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 02:16:00.329477 kubelet[2271]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
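The first kubelet instance above (pid 1920) logs a run of "dial tcp 10.230.33.70:6443: connect: connection refused" errors until its own static kube-apiserver pod starts, after which node registration succeeds. When reading a boot log like this one, a plain TCP probe is usually enough to tell "nothing is listening yet" apart from a network or firewall problem. The sketch below is only an illustration of that check: the host and port are the ones that appear in these log lines, and the helper name is my own, not anything referenced in the log.

# Sketch: distinguish "nothing listening yet" from other failures when a
# component reports "dial tcp ...:6443: connect: connection refused".
import socket

API_HOST, API_PORT = "10.230.33.70", 6443  # endpoint taken from the log above

def apiserver_tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except ConnectionRefusedError:
        # The host answered but nothing listens on the port yet -- this matches
        # the "connection refused" phase before the static apiserver pod is up.
        return False
    except OSError as exc:
        # Timeouts, unreachable networks, etc. point at a different problem.
        print(f"probe failed for another reason: {exc}")
        return False

if __name__ == "__main__":
    print("apiserver port open:", apiserver_tcp_reachable(API_HOST, API_PORT))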
May 10 02:16:00.345193 kubelet[2271]: I0510 02:16:00.345040 2271 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 02:16:00.354981 kubelet[2271]: I0510 02:16:00.354928 2271 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 02:16:00.355168 kubelet[2271]: I0510 02:16:00.355143 2271 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 02:16:00.355533 kubelet[2271]: I0510 02:16:00.355508 2271 server.go:927] "Client rotation is on, will bootstrap in background" May 10 02:16:00.357549 kubelet[2271]: I0510 02:16:00.357524 2271 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 02:16:00.359592 kubelet[2271]: I0510 02:16:00.359553 2271 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 02:16:00.375799 kubelet[2271]: I0510 02:16:00.375764 2271 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 02:16:00.378131 kubelet[2271]: I0510 02:16:00.378074 2271 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 02:16:00.378575 kubelet[2271]: I0510 02:16:00.378281 2271 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-it8yl.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 02:16:00.378839 kubelet[2271]: I0510 02:16:00.378813 2271 topology_manager.go:138] "Creating topology manager with none policy" May 10 02:16:00.378980 kubelet[2271]: I0510 02:16:00.378958 2271 container_manager_linux.go:301] "Creating device plugin manager" May 10 02:16:00.379212 kubelet[2271]: I0510 02:16:00.379178 2271 state_mem.go:36] "Initialized new in-memory state store" May 10 02:16:00.379569 kubelet[2271]: I0510 02:16:00.379535 2271 kubelet.go:400] "Attempting to sync node with API server" May 10 02:16:00.379795 kubelet[2271]: I0510 02:16:00.379770 2271 kubelet.go:301] 
"Adding static pod path" path="/etc/kubernetes/manifests" May 10 02:16:00.380050 kubelet[2271]: I0510 02:16:00.380028 2271 kubelet.go:312] "Adding apiserver pod source" May 10 02:16:00.380363 kubelet[2271]: I0510 02:16:00.380317 2271 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 02:16:00.391453 kubelet[2271]: I0510 02:16:00.391427 2271 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 02:16:00.391891 kubelet[2271]: I0510 02:16:00.391867 2271 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 02:16:00.392632 kubelet[2271]: I0510 02:16:00.392610 2271 server.go:1264] "Started kubelet" May 10 02:16:00.395000 audit[2271]: AVC avc: denied { mac_admin } for pid=2271 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:00.396445 kubelet[2271]: I0510 02:16:00.396401 2271 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 10 02:16:00.396657 kubelet[2271]: I0510 02:16:00.396611 2271 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 10 02:16:00.396838 kubelet[2271]: I0510 02:16:00.396803 2271 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 02:16:00.401657 kernel: audit: type=1400 audit(1746843360.395:238): avc: denied { mac_admin } for pid=2271 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:00.395000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:16:00.407006 kernel: audit: type=1401 audit(1746843360.395:238): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:16:00.422767 kernel: audit: type=1300 audit(1746843360.395:238): arch=c000003e syscall=188 success=no exit=-22 a0=c000bba270 a1=c000857578 a2=c000bba240 a3=25 items=0 ppid=1 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:00.395000 audit[2271]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bba270 a1=c000857578 a2=c000bba240 a3=25 items=0 ppid=1 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:00.423104 kubelet[2271]: I0510 02:16:00.422487 2271 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 02:16:00.427841 kubelet[2271]: I0510 02:16:00.426242 2271 server.go:455] "Adding debug handlers to kubelet server" May 10 02:16:00.427841 kubelet[2271]: I0510 02:16:00.427498 2271 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 02:16:00.430198 kubelet[2271]: I0510 02:16:00.429458 2271 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 02:16:00.395000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 10 02:16:00.440145 kernel: audit: type=1327 audit(1746843360.395:238): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 10 02:16:00.440221 kubelet[2271]: I0510 02:16:00.440175 2271 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 02:16:00.441451 kubelet[2271]: I0510 02:16:00.441301 2271 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 02:16:00.441740 kubelet[2271]: I0510 02:16:00.441515 2271 reconciler.go:26] "Reconciler: start to sync state" May 10 02:16:00.395000 audit[2271]: AVC avc: denied { mac_admin } for pid=2271 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:00.395000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:16:00.452910 kernel: audit: type=1400 audit(1746843360.395:239): avc: denied { mac_admin } for pid=2271 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:00.453015 kernel: audit: type=1401 audit(1746843360.395:239): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:16:00.395000 audit[2271]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00096c5c0 a1=c000857590 a2=c000bba300 a3=25 items=0 ppid=1 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:00.470520 kernel: audit: type=1300 audit(1746843360.395:239): arch=c000003e syscall=188 success=no exit=-22 a0=c00096c5c0 a1=c000857590 a2=c000bba300 a3=25 items=0 ppid=1 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:00.395000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 10 02:16:00.476110 kubelet[2271]: I0510 02:16:00.475403 2271 factory.go:221] Registration of the systemd container factory successfully May 10 02:16:00.476110 kubelet[2271]: I0510 02:16:00.475570 2271 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 02:16:00.486310 kernel: audit: type=1327 audit(1746843360.395:239): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 10 02:16:00.491227 kubelet[2271]: I0510 02:16:00.490791 2271 
factory.go:221] Registration of the containerd container factory successfully May 10 02:16:00.493780 kubelet[2271]: E0510 02:16:00.493750 2271 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 02:16:00.507686 kubelet[2271]: I0510 02:16:00.507614 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 02:16:00.515707 kubelet[2271]: I0510 02:16:00.515682 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 02:16:00.515877 kubelet[2271]: I0510 02:16:00.515854 2271 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 02:16:00.516020 kubelet[2271]: I0510 02:16:00.515998 2271 kubelet.go:2337] "Starting kubelet main sync loop" May 10 02:16:00.516270 kubelet[2271]: E0510 02:16:00.516239 2271 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 02:16:00.566482 kubelet[2271]: I0510 02:16:00.562935 2271 kubelet_node_status.go:73] "Attempting to register node" node="srv-it8yl.gb1.brightbox.com" May 10 02:16:00.579969 kubelet[2271]: I0510 02:16:00.579931 2271 kubelet_node_status.go:112] "Node was previously registered" node="srv-it8yl.gb1.brightbox.com" May 10 02:16:00.580272 kubelet[2271]: I0510 02:16:00.580250 2271 kubelet_node_status.go:76] "Successfully registered node" node="srv-it8yl.gb1.brightbox.com" May 10 02:16:00.617364 kubelet[2271]: E0510 02:16:00.617326 2271 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 10 02:16:00.673442 kubelet[2271]: I0510 02:16:00.673409 2271 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 02:16:00.673813 kubelet[2271]: I0510 02:16:00.673786 2271 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 02:16:00.674109 kubelet[2271]: I0510 02:16:00.674087 2271 state_mem.go:36] "Initialized new in-memory state store" May 10 02:16:00.674530 kubelet[2271]: I0510 02:16:00.674505 2271 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 02:16:00.674723 kubelet[2271]: I0510 02:16:00.674679 2271 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 02:16:00.674870 kubelet[2271]: I0510 02:16:00.674848 2271 policy_none.go:49] "None policy: Start" May 10 02:16:00.676217 kubelet[2271]: I0510 02:16:00.676193 2271 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 02:16:00.676409 kubelet[2271]: I0510 02:16:00.676387 2271 state_mem.go:35] "Initializing new in-memory state store" May 10 02:16:00.676781 kubelet[2271]: I0510 02:16:00.676757 2271 state_mem.go:75] "Updated machine memory state" May 10 02:16:00.681460 kubelet[2271]: I0510 02:16:00.679206 2271 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 02:16:00.680000 audit[2271]: AVC avc: denied { mac_admin } for pid=2271 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:00.680000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 10 02:16:00.681981 kubelet[2271]: I0510 02:16:00.681929 2271 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 10 02:16:00.682516 kubelet[2271]: I0510 02:16:00.682436 2271 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 02:16:00.680000 audit[2271]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011aec00 a1=c00118db48 a2=c0011aebd0 a3=25 items=0 ppid=1 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:00.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 10 02:16:00.683787 kubelet[2271]: I0510 02:16:00.683766 2271 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 02:16:00.818255 kubelet[2271]: I0510 02:16:00.818170 2271 topology_manager.go:215] "Topology Admit Handler" podUID="4023df08c7c2d7cd8d2940e6bde23fba" podNamespace="kube-system" podName="kube-apiserver-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.818582 kubelet[2271]: I0510 02:16:00.818552 2271 topology_manager.go:215] "Topology Admit Handler" podUID="3f39851ab8dea1dd70e96e72fe31fc98" podNamespace="kube-system" podName="kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.819004 kubelet[2271]: I0510 02:16:00.818973 2271 topology_manager.go:215] "Topology Admit Handler" podUID="b3b5e1a762b0e0b1a86669c4a05c1056" podNamespace="kube-system" podName="kube-scheduler-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.830226 kubelet[2271]: W0510 02:16:00.830182 2271 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 02:16:00.830834 kubelet[2271]: W0510 02:16:00.830806 2271 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 02:16:00.850848 kubelet[2271]: W0510 02:16:00.850804 2271 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 02:16:00.851172 kubelet[2271]: E0510 02:16:00.851140 2271 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-it8yl.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.946599 kubelet[2271]: I0510 02:16:00.946417 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4023df08c7c2d7cd8d2940e6bde23fba-ca-certs\") pod \"kube-apiserver-srv-it8yl.gb1.brightbox.com\" (UID: \"4023df08c7c2d7cd8d2940e6bde23fba\") " pod="kube-system/kube-apiserver-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.946952 kubelet[2271]: I0510 02:16:00.946920 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4023df08c7c2d7cd8d2940e6bde23fba-k8s-certs\") pod \"kube-apiserver-srv-it8yl.gb1.brightbox.com\" (UID: \"4023df08c7c2d7cd8d2940e6bde23fba\") " 
pod="kube-system/kube-apiserver-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.947195 kubelet[2271]: I0510 02:16:00.947130 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-ca-certs\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.947456 kubelet[2271]: I0510 02:16:00.947431 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-flexvolume-dir\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.947698 kubelet[2271]: I0510 02:16:00.947672 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-k8s-certs\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.947935 kubelet[2271]: I0510 02:16:00.947876 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3b5e1a762b0e0b1a86669c4a05c1056-kubeconfig\") pod \"kube-scheduler-srv-it8yl.gb1.brightbox.com\" (UID: \"b3b5e1a762b0e0b1a86669c4a05c1056\") " pod="kube-system/kube-scheduler-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.948279 kubelet[2271]: I0510 02:16:00.948250 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4023df08c7c2d7cd8d2940e6bde23fba-usr-share-ca-certificates\") pod \"kube-apiserver-srv-it8yl.gb1.brightbox.com\" (UID: \"4023df08c7c2d7cd8d2940e6bde23fba\") " pod="kube-system/kube-apiserver-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.948484 kubelet[2271]: I0510 02:16:00.948436 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-kubeconfig\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:16:00.948754 kubelet[2271]: I0510 02:16:00.948725 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f39851ab8dea1dd70e96e72fe31fc98-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-it8yl.gb1.brightbox.com\" (UID: \"3f39851ab8dea1dd70e96e72fe31fc98\") " pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" May 10 02:16:01.386689 kubelet[2271]: I0510 02:16:01.386622 2271 apiserver.go:52] "Watching apiserver" May 10 02:16:01.441552 kubelet[2271]: I0510 02:16:01.441496 2271 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 02:16:01.615695 kubelet[2271]: W0510 02:16:01.615653 2271 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 02:16:01.616075 kubelet[2271]: E0510 02:16:01.616036 2271 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-it8yl.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-it8yl.gb1.brightbox.com" May 10 02:16:01.620143 kubelet[2271]: I0510 02:16:01.620062 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-it8yl.gb1.brightbox.com" podStartSLOduration=5.620007454 podStartE2EDuration="5.620007454s" podCreationTimestamp="2025-05-10 02:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:16:01.619024012 +0000 UTC m=+1.402205543" watchObservedRunningTime="2025-05-10 02:16:01.620007454 +0000 UTC m=+1.403188976" May 10 02:16:01.647835 kubelet[2271]: I0510 02:16:01.647684 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-it8yl.gb1.brightbox.com" podStartSLOduration=1.647662833 podStartE2EDuration="1.647662833s" podCreationTimestamp="2025-05-10 02:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:16:01.632603335 +0000 UTC m=+1.415784870" watchObservedRunningTime="2025-05-10 02:16:01.647662833 +0000 UTC m=+1.430844369" May 10 02:16:01.675077 kubelet[2271]: I0510 02:16:01.675011 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-it8yl.gb1.brightbox.com" podStartSLOduration=1.674988961 podStartE2EDuration="1.674988961s" podCreationTimestamp="2025-05-10 02:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:16:01.650367325 +0000 UTC m=+1.433548860" watchObservedRunningTime="2025-05-10 02:16:01.674988961 +0000 UTC m=+1.458170497" May 10 02:16:06.359761 sudo[1531]: pam_unix(sudo:session): session closed for user root May 10 02:16:06.359000 audit[1531]: USER_END pid=1531 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 10 02:16:06.365946 kernel: kauditd_printk_skb: 4 callbacks suppressed May 10 02:16:06.366154 kernel: audit: type=1106 audit(1746843366.359:241): pid=1531 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 10 02:16:06.365000 audit[1531]: CRED_DISP pid=1531 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 10 02:16:06.378734 kernel: audit: type=1104 audit(1746843366.365:242): pid=1531 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 10 02:16:06.522428 sshd[1527]: pam_unix(sshd:session): session closed for user core May 10 02:16:06.524000 audit[1527]: USER_END pid=1527 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:16:06.527678 systemd[1]: sshd@8-10.230.33.70:22-139.178.68.195:54884.service: Deactivated successfully. May 10 02:16:06.529983 systemd[1]: session-9.scope: Deactivated successfully. May 10 02:16:06.532655 systemd-logind[1288]: Session 9 logged out. Waiting for processes to exit. May 10 02:16:06.534933 systemd-logind[1288]: Removed session 9. May 10 02:16:06.536676 kernel: audit: type=1106 audit(1746843366.524:243): pid=1527 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:16:06.524000 audit[1527]: CRED_DISP pid=1527 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:16:06.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.33.70:22-139.178.68.195:54884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:16:06.550014 kernel: audit: type=1104 audit(1746843366.524:244): pid=1527 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:16:06.550128 kernel: audit: type=1131 audit(1746843366.527:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.33.70:22-139.178.68.195:54884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:16:11.952829 kubelet[2271]: I0510 02:16:11.952731 2271 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 02:16:11.954399 env[1300]: time="2025-05-10T02:16:11.954343077Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
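Just before this point the kubelet receives the node's pod CIDR and pushes it to the container runtime ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"), while containerd notes that no CNI config has been dropped yet. A short sketch with Python's standard ipaddress module shows what that /24 allocation provides for the node; the CIDR is taken from the log, but the candidate pod IP in the membership check is a hypothetical value chosen only for illustration.

# Sketch: what the node pod CIDR announced above (192.168.0.0/24) provides.
import ipaddress

node_pod_cidr = ipaddress.ip_network("192.168.0.0/24")  # value from the log above

# A /24 yields 256 addresses for this node's pods; how many are actually
# usable depends on the CNI plugin's allocation scheme.
print("addresses in node CIDR:", node_pod_cidr.num_addresses)

# Hypothetical pod IP, used only to illustrate the membership check.
candidate = ipaddress.ip_address("192.168.0.17")
print(candidate, "belongs to this node:", candidate in node_pod_cidr)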
May 10 02:16:11.955247 kubelet[2271]: I0510 02:16:11.955210 2271 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 02:16:12.358944 kubelet[2271]: I0510 02:16:12.358881 2271 topology_manager.go:215] "Topology Admit Handler" podUID="12a21eb8-46e7-4972-acfa-451957edce8f" podNamespace="kube-system" podName="kube-proxy-pp6sv" May 10 02:16:12.366727 kubelet[2271]: W0510 02:16:12.366679 2271 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-it8yl.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-it8yl.gb1.brightbox.com' and this object May 10 02:16:12.367038 kubelet[2271]: E0510 02:16:12.367000 2271 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-it8yl.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-it8yl.gb1.brightbox.com' and this object May 10 02:16:12.367195 kubelet[2271]: W0510 02:16:12.366948 2271 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-it8yl.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-it8yl.gb1.brightbox.com' and this object May 10 02:16:12.367365 kubelet[2271]: E0510 02:16:12.367339 2271 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-it8yl.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-it8yl.gb1.brightbox.com' and this object May 10 02:16:12.425881 kubelet[2271]: I0510 02:16:12.425797 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/12a21eb8-46e7-4972-acfa-451957edce8f-kube-proxy\") pod \"kube-proxy-pp6sv\" (UID: \"12a21eb8-46e7-4972-acfa-451957edce8f\") " pod="kube-system/kube-proxy-pp6sv" May 10 02:16:12.426188 kubelet[2271]: I0510 02:16:12.426156 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12a21eb8-46e7-4972-acfa-451957edce8f-xtables-lock\") pod \"kube-proxy-pp6sv\" (UID: \"12a21eb8-46e7-4972-acfa-451957edce8f\") " pod="kube-system/kube-proxy-pp6sv" May 10 02:16:12.426361 kubelet[2271]: I0510 02:16:12.426328 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g7cv\" (UniqueName: \"kubernetes.io/projected/12a21eb8-46e7-4972-acfa-451957edce8f-kube-api-access-7g7cv\") pod \"kube-proxy-pp6sv\" (UID: \"12a21eb8-46e7-4972-acfa-451957edce8f\") " pod="kube-system/kube-proxy-pp6sv" May 10 02:16:12.426529 kubelet[2271]: I0510 02:16:12.426500 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12a21eb8-46e7-4972-acfa-451957edce8f-lib-modules\") pod \"kube-proxy-pp6sv\" (UID: \"12a21eb8-46e7-4972-acfa-451957edce8f\") " 
pod="kube-system/kube-proxy-pp6sv" May 10 02:16:13.102683 kubelet[2271]: I0510 02:16:13.102610 2271 topology_manager.go:215] "Topology Admit Handler" podUID="2e7ae7ab-b4e1-43a2-964e-f73a8a2624b3" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-zmr55" May 10 02:16:13.131315 kubelet[2271]: I0510 02:16:13.131268 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzk78\" (UniqueName: \"kubernetes.io/projected/2e7ae7ab-b4e1-43a2-964e-f73a8a2624b3-kube-api-access-tzk78\") pod \"tigera-operator-797db67f8-zmr55\" (UID: \"2e7ae7ab-b4e1-43a2-964e-f73a8a2624b3\") " pod="tigera-operator/tigera-operator-797db67f8-zmr55" May 10 02:16:13.131738 kubelet[2271]: I0510 02:16:13.131712 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2e7ae7ab-b4e1-43a2-964e-f73a8a2624b3-var-lib-calico\") pod \"tigera-operator-797db67f8-zmr55\" (UID: \"2e7ae7ab-b4e1-43a2-964e-f73a8a2624b3\") " pod="tigera-operator/tigera-operator-797db67f8-zmr55" May 10 02:16:13.416667 env[1300]: time="2025-05-10T02:16:13.415616021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-zmr55,Uid:2e7ae7ab-b4e1-43a2-964e-f73a8a2624b3,Namespace:tigera-operator,Attempt:0,}" May 10 02:16:13.462204 env[1300]: time="2025-05-10T02:16:13.462052717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:16:13.462456 env[1300]: time="2025-05-10T02:16:13.462180146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:16:13.462669 env[1300]: time="2025-05-10T02:16:13.462442281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:16:13.463164 env[1300]: time="2025-05-10T02:16:13.463067640Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33e103c6f2798425321d1a327a801430ed07b201f2ceb86cd2d4cdf32603fb71 pid=2360 runtime=io.containerd.runc.v2 May 10 02:16:13.528078 kubelet[2271]: E0510 02:16:13.528039 2271 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 10 02:16:13.528538 kubelet[2271]: E0510 02:16:13.528371 2271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12a21eb8-46e7-4972-acfa-451957edce8f-kube-proxy podName:12a21eb8-46e7-4972-acfa-451957edce8f nodeName:}" failed. No retries permitted until 2025-05-10 02:16:14.028327875 +0000 UTC m=+13.811509397 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/12a21eb8-46e7-4972-acfa-451957edce8f-kube-proxy") pod "kube-proxy-pp6sv" (UID: "12a21eb8-46e7-4972-acfa-451957edce8f") : failed to sync configmap cache: timed out waiting for the condition May 10 02:16:13.569584 env[1300]: time="2025-05-10T02:16:13.569531368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-zmr55,Uid:2e7ae7ab-b4e1-43a2-964e-f73a8a2624b3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"33e103c6f2798425321d1a327a801430ed07b201f2ceb86cd2d4cdf32603fb71\"" May 10 02:16:13.575217 env[1300]: time="2025-05-10T02:16:13.575166101Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 10 02:16:14.170558 env[1300]: time="2025-05-10T02:16:14.170427167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pp6sv,Uid:12a21eb8-46e7-4972-acfa-451957edce8f,Namespace:kube-system,Attempt:0,}" May 10 02:16:14.190382 env[1300]: time="2025-05-10T02:16:14.190100893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:16:14.190382 env[1300]: time="2025-05-10T02:16:14.190156599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:16:14.190382 env[1300]: time="2025-05-10T02:16:14.190173881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:16:14.190873 env[1300]: time="2025-05-10T02:16:14.190432217Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e84b692427f1ce109689d21a64f0bbb7b35171bb6390e2366a8e0fb416c58fde pid=2402 runtime=io.containerd.runc.v2 May 10 02:16:14.247303 systemd[1]: run-containerd-runc-k8s.io-33e103c6f2798425321d1a327a801430ed07b201f2ceb86cd2d4cdf32603fb71-runc.1qcufh.mount: Deactivated successfully. May 10 02:16:14.267724 env[1300]: time="2025-05-10T02:16:14.267666365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pp6sv,Uid:12a21eb8-46e7-4972-acfa-451957edce8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e84b692427f1ce109689d21a64f0bbb7b35171bb6390e2366a8e0fb416c58fde\"" May 10 02:16:14.273124 env[1300]: time="2025-05-10T02:16:14.273050171Z" level=info msg="CreateContainer within sandbox \"e84b692427f1ce109689d21a64f0bbb7b35171bb6390e2366a8e0fb416c58fde\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 02:16:14.291270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114875804.mount: Deactivated successfully. May 10 02:16:14.301658 env[1300]: time="2025-05-10T02:16:14.301447190Z" level=info msg="CreateContainer within sandbox \"e84b692427f1ce109689d21a64f0bbb7b35171bb6390e2366a8e0fb416c58fde\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"901516269e26e8a4862700ad71f51e87406f18d2f7714869016d75c9eb2ea399\"" May 10 02:16:14.303736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071355203.mount: Deactivated successfully. 
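The systemd entries in this stretch of the journal reference mount units such as "var-lib-containerd-tmpmounts-containerd\x2dmount1071355203.mount". systemd derives these unit names from filesystem paths by turning "/" into "-" and escaping other literal characters as "\xNN" (so "\x2d" is a real hyphen in the path). The following is a minimal Python sketch, not systemd's own implementation and only covering the simple escapes seen in these records, that recovers the mounted path from a unit name of this form.

import re

def unescape_mount_unit(unit: str) -> str:
    """Best-effort reversal of systemd's path escaping for .mount unit names."""
    name = unit.removesuffix(".mount")
    name = name.replace("-", "/")                     # "-" separates path components
    name = re.sub(r"\\x([0-9a-fA-F]{2})",             # "\x2d" and friends are escaped bytes
                  lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name

print(unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount1071355203.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount1071355203
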
May 10 02:16:14.305473 env[1300]: time="2025-05-10T02:16:14.304184605Z" level=info msg="StartContainer for \"901516269e26e8a4862700ad71f51e87406f18d2f7714869016d75c9eb2ea399\"" May 10 02:16:14.392218 env[1300]: time="2025-05-10T02:16:14.392160440Z" level=info msg="StartContainer for \"901516269e26e8a4862700ad71f51e87406f18d2f7714869016d75c9eb2ea399\" returns successfully" May 10 02:16:14.626816 kubelet[2271]: I0510 02:16:14.626705 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pp6sv" podStartSLOduration=2.626685178 podStartE2EDuration="2.626685178s" podCreationTimestamp="2025-05-10 02:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:16:14.626505101 +0000 UTC m=+14.409686678" watchObservedRunningTime="2025-05-10 02:16:14.626685178 +0000 UTC m=+14.409866711" May 10 02:16:14.758000 audit[2495]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.758000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5d1db1f0 a2=0 a3=7ffc5d1db1dc items=0 ppid=2455 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.774478 kernel: audit: type=1325 audit(1746843374.758:246): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.774684 kernel: audit: type=1300 audit(1746843374.758:246): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5d1db1f0 a2=0 a3=7ffc5d1db1dc items=0 ppid=2455 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.774755 kernel: audit: type=1327 audit(1746843374.758:246): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 10 02:16:14.758000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 10 02:16:14.778610 kernel: audit: type=1325 audit(1746843374.765:247): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:14.765000 audit[2496]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:14.783735 kernel: audit: type=1300 audit(1746843374.765:247): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe381d3930 a2=0 a3=7ffe381d391c items=0 ppid=2455 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.765000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe381d3930 a2=0 a3=7ffe381d391c items=0 ppid=2455 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.765000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 10 02:16:14.797725 kernel: audit: type=1327 audit(1746843374.765:247): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 10 02:16:14.765000 audit[2497]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.765000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd58976610 a2=0 a3=7ffd589765fc items=0 ppid=2455 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.810088 kernel: audit: type=1325 audit(1746843374.765:248): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.810169 kernel: audit: type=1300 audit(1746843374.765:248): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd58976610 a2=0 a3=7ffd589765fc items=0 ppid=2455 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.810226 kernel: audit: type=1327 audit(1746843374.765:248): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 10 02:16:14.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 10 02:16:14.814105 kernel: audit: type=1325 audit(1746843374.765:249): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.765000 audit[2498]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.765000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff43d6c2e0 a2=0 a3=7fff43d6c2cc items=0 ppid=2455 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 10 02:16:14.782000 audit[2499]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:14.782000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddf80a4f0 a2=0 a3=7ffddf80a4dc items=0 ppid=2455 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 10 02:16:14.787000 audit[2500]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:14.787000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 
a0=3 a1=7ffc90364160 a2=0 a3=7ffc9036414c items=0 ppid=2455 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 10 02:16:14.885000 audit[2501]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.885000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd06f8d4f0 a2=0 a3=7ffd06f8d4dc items=0 ppid=2455 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.885000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 10 02:16:14.899000 audit[2503]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.899000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcbb03c640 a2=0 a3=7ffcbb03c62c items=0 ppid=2455 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 10 02:16:14.908000 audit[2506]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.908000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff922a8fb0 a2=0 a3=7fff922a8f9c items=0 ppid=2455 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.908000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 10 02:16:14.913000 audit[2507]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.913000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff95cf7dc0 a2=0 a3=7fff95cf7dac items=0 ppid=2455 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 10 02:16:14.922000 audit[2509]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2509 
subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.922000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffba433d50 a2=0 a3=7fffba433d3c items=0 ppid=2455 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.922000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 10 02:16:14.925000 audit[2510]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.925000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee7da0a60 a2=0 a3=7ffee7da0a4c items=0 ppid=2455 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 10 02:16:14.931000 audit[2512]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.931000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff44e90b40 a2=0 a3=7fff44e90b2c items=0 ppid=2455 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 10 02:16:14.938000 audit[2516]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.938000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff42057ee0 a2=0 a3=7fff42057ecc items=0 ppid=2455 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.938000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 10 02:16:14.940000 audit[2517]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.940000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5e147780 a2=0 a3=7fff5e14776c items=0 ppid=2455 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 
02:16:14.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 10 02:16:14.944000 audit[2519]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.944000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc980b46a0 a2=0 a3=7ffc980b468c items=0 ppid=2455 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.944000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 10 02:16:14.946000 audit[2520]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.946000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef45508d0 a2=0 a3=7ffef45508bc items=0 ppid=2455 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 10 02:16:14.950000 audit[2522]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.950000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd4f59e70 a2=0 a3=7ffdd4f59e5c items=0 ppid=2455 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.950000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 10 02:16:14.956000 audit[2525]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.956000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd828de7a0 a2=0 a3=7ffd828de78c items=0 ppid=2455 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 10 02:16:14.962000 audit[2528]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.962000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd03d11ed0 a2=0 
a3=7ffd03d11ebc items=0 ppid=2455 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 10 02:16:14.964000 audit[2529]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.964000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff9453fd90 a2=0 a3=7fff9453fd7c items=0 ppid=2455 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.964000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 10 02:16:14.969000 audit[2531]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.969000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdb1022510 a2=0 a3=7ffdb10224fc items=0 ppid=2455 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.969000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 10 02:16:14.974000 audit[2534]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.974000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2b27d600 a2=0 a3=7ffd2b27d5ec items=0 ppid=2455 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.974000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 10 02:16:14.976000 audit[2535]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.976000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc9c887b0 a2=0 a3=7fffc9c8879c items=0 ppid=2455 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 10 02:16:14.981000 audit[2537]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2537 
subj=system_u:system_r:kernel_t:s0 comm="iptables" May 10 02:16:14.981000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc6d8513e0 a2=0 a3=7ffc6d8513cc items=0 ppid=2455 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:14.981000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 10 02:16:15.020000 audit[2543]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:15.020000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe7ff93e70 a2=0 a3=7ffe7ff93e5c items=0 ppid=2455 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:15.031000 audit[2543]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:15.031000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe7ff93e70 a2=0 a3=7ffe7ff93e5c items=0 ppid=2455 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:15.034000 audit[2549]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.034000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc503f5af0 a2=0 a3=7ffc503f5adc items=0 ppid=2455 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 10 02:16:15.039000 audit[2551]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.039000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd16bcc2a0 a2=0 a3=7ffd16bcc28c items=0 ppid=2455 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.039000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 10 02:16:15.045000 audit[2554]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.045000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe09e6dd30 a2=0 a3=7ffe09e6dd1c items=0 ppid=2455 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.045000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 10 02:16:15.047000 audit[2555]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.047000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfbd1bf90 a2=0 a3=7ffcfbd1bf7c items=0 ppid=2455 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 10 02:16:15.051000 audit[2557]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.051000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcaf5219a0 a2=0 a3=7ffcaf52198c items=0 ppid=2455 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 10 02:16:15.053000 audit[2558]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.053000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbb3aa8c0 a2=0 a3=7fffbb3aa8ac items=0 ppid=2455 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 10 02:16:15.058000 audit[2560]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.058000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff89a41f60 a2=0 
a3=7fff89a41f4c items=0 ppid=2455 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 10 02:16:15.065000 audit[2563]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.065000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff10a222f0 a2=0 a3=7fff10a222dc items=0 ppid=2455 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 10 02:16:15.067000 audit[2564]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.067000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc012f6680 a2=0 a3=7ffc012f666c items=0 ppid=2455 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 10 02:16:15.072000 audit[2566]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.072000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff82957330 a2=0 a3=7fff8295731c items=0 ppid=2455 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 10 02:16:15.075000 audit[2567]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.075000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb5414260 a2=0 a3=7ffeb541424c items=0 ppid=2455 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.075000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 10 02:16:15.080000 
audit[2569]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.080000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc70d078a0 a2=0 a3=7ffc70d0788c items=0 ppid=2455 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.080000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 10 02:16:15.087000 audit[2572]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.087000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6819ade0 a2=0 a3=7ffe6819adcc items=0 ppid=2455 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 10 02:16:15.098000 audit[2575]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.098000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4ee28b60 a2=0 a3=7ffe4ee28b4c items=0 ppid=2455 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 10 02:16:15.099000 audit[2576]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2576 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.099000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcaa100900 a2=0 a3=7ffcaa1008ec items=0 ppid=2455 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 10 02:16:15.104000 audit[2578]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.104000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff764a47e0 a2=0 a3=7fff764a47cc items=0 ppid=2455 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 10 02:16:15.110000 audit[2581]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.110000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff02ed8370 a2=0 a3=7fff02ed835c items=0 ppid=2455 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.110000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 10 02:16:15.112000 audit[2582]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.112000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3c49d7e0 a2=0 a3=7fff3c49d7cc items=0 ppid=2455 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.112000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 10 02:16:15.123000 audit[2584]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.123000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdf286bbe0 a2=0 a3=7ffdf286bbcc items=0 ppid=2455 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 10 02:16:15.125000 audit[2585]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.125000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda70de0a0 a2=0 a3=7ffda70de08c items=0 ppid=2455 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.125000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 10 02:16:15.132000 audit[2587]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.132000 audit[2587]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7fffb010d120 a2=0 a3=7fffb010d10c items=0 ppid=2455 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 10 02:16:15.143000 audit[2590]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 10 02:16:15.143000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff44b080d0 a2=0 a3=7fff44b080bc items=0 ppid=2455 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 10 02:16:15.148000 audit[2592]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 10 02:16:15.148000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7fff780a20f0 a2=0 a3=7fff780a20dc items=0 ppid=2455 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.148000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:15.149000 audit[2592]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 10 02:16:15.149000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff780a20f0 a2=0 a3=7fff780a20dc items=0 ppid=2455 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:15.149000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:15.556138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount741017874.mount: Deactivated successfully. 
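The NETFILTER_CFG/SYSCALL audit records above each end in a PROCTITLE line whose value is the invoked command hex-encoded, with NUL bytes separating the argv entries. A short Python sketch for decoding those values back into readable iptables/ip6tables command lines, using the first record of this run as the example:

def decode_proctitle(hex_title: str) -> str:
    """Turn an audit PROCTITLE hex string into a space-joined command line."""
    return " ".join(arg.decode() for arg in bytes.fromhex(hex_title).split(b"\x00"))

print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

Note that auditd truncates long PROCTITLE values, so some of the records above decode to a command line that is cut off mid-argument.
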
May 10 02:16:16.782620 env[1300]: time="2025-05-10T02:16:16.782549933Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:16.785921 env[1300]: time="2025-05-10T02:16:16.785886514Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:16.787902 env[1300]: time="2025-05-10T02:16:16.787862214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:16.790136 env[1300]: time="2025-05-10T02:16:16.790100132Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:16.791005 env[1300]: time="2025-05-10T02:16:16.790940723Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 10 02:16:16.799352 env[1300]: time="2025-05-10T02:16:16.799302996Z" level=info msg="CreateContainer within sandbox \"33e103c6f2798425321d1a327a801430ed07b201f2ceb86cd2d4cdf32603fb71\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 10 02:16:16.813008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700550311.mount: Deactivated successfully. May 10 02:16:16.820208 env[1300]: time="2025-05-10T02:16:16.820153918Z" level=info msg="CreateContainer within sandbox \"33e103c6f2798425321d1a327a801430ed07b201f2ceb86cd2d4cdf32603fb71\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"917a6c8721f38b2125416b741185b2f1a634337d78d8a38a54270a1420aa6085\"" May 10 02:16:16.822923 env[1300]: time="2025-05-10T02:16:16.822881737Z" level=info msg="StartContainer for \"917a6c8721f38b2125416b741185b2f1a634337d78d8a38a54270a1420aa6085\"" May 10 02:16:17.037179 env[1300]: time="2025-05-10T02:16:17.037035788Z" level=info msg="StartContainer for \"917a6c8721f38b2125416b741185b2f1a634337d78d8a38a54270a1420aa6085\" returns successfully" May 10 02:16:17.808986 systemd[1]: run-containerd-runc-k8s.io-917a6c8721f38b2125416b741185b2f1a634337d78d8a38a54270a1420aa6085-runc.JRypV8.mount: Deactivated successfully. 
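The containerd entries above (logged via env[1300]) carry their payload in time=/level=/msg= fields. A rough Python sketch, assuming the journal keeps exactly that layout, for pulling those fields out of a line when working through this log; the sample line below is abbreviated for illustration:

import re

FIELDS = re.compile(
    r'time="(?P<time>[^"]+)"\s+level=(?P<level>\w+)\s+msg="(?P<msg>(?:[^"\\]|\\.)*)"')

line = ('May 10 02:16:16.799352 env[1300]: time="2025-05-10T02:16:16.799302996Z" '
        'level=info msg="CreateContainer within sandbox ..."')  # abbreviated example
m = FIELDS.search(line)
if m:
    print(m.group("time"), m.group("level"), m.group("msg"))
# -> 2025-05-10T02:16:16.799302996Z info CreateContainer within sandbox ...
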
May 10 02:16:20.060057 kernel: kauditd_printk_skb: 143 callbacks suppressed May 10 02:16:20.060278 kernel: audit: type=1325 audit(1746843380.049:297): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:20.049000 audit[2633]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:20.049000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff9823c760 a2=0 a3=7fff9823c74c items=0 ppid=2455 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:20.085648 kernel: audit: type=1300 audit(1746843380.049:297): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff9823c760 a2=0 a3=7fff9823c74c items=0 ppid=2455 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:20.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:20.093667 kernel: audit: type=1327 audit(1746843380.049:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:20.063000 audit[2633]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:20.063000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9823c760 a2=0 a3=0 items=0 ppid=2455 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:20.106104 kernel: audit: type=1325 audit(1746843380.063:298): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:20.106204 kernel: audit: type=1300 audit(1746843380.063:298): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9823c760 a2=0 a3=0 items=0 ppid=2455 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:20.063000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:20.110646 kernel: audit: type=1327 audit(1746843380.063:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:20.111000 audit[2635]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:20.111000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc5e017550 a2=0 a3=7ffc5e01753c items=0 ppid=2455 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:20.124293 
kernel: audit: type=1325 audit(1746843380.111:299): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:20.124370 kernel: audit: type=1300 audit(1746843380.111:299): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc5e017550 a2=0 a3=7ffc5e01753c items=0 ppid=2455 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:20.111000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:20.132665 kernel: audit: type=1327 audit(1746843380.111:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:20.126000 audit[2635]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:20.148658 kernel: audit: type=1325 audit(1746843380.126:300): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:20.126000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5e017550 a2=0 a3=0 items=0 ppid=2455 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:20.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:20.397210 kubelet[2271]: I0510 02:16:20.397001 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-zmr55" podStartSLOduration=4.176305834 podStartE2EDuration="7.396934506s" podCreationTimestamp="2025-05-10 02:16:13 +0000 UTC" firstStartedPulling="2025-05-10 02:16:13.572424531 +0000 UTC m=+13.355606053" lastFinishedPulling="2025-05-10 02:16:16.793053209 +0000 UTC m=+16.576234725" observedRunningTime="2025-05-10 02:16:17.638974314 +0000 UTC m=+17.422155850" watchObservedRunningTime="2025-05-10 02:16:20.396934506 +0000 UTC m=+20.180116028" May 10 02:16:20.398591 kubelet[2271]: I0510 02:16:20.398537 2271 topology_manager.go:215] "Topology Admit Handler" podUID="56dada48-8879-40a3-af1d-c6ab39542dfa" podNamespace="calico-system" podName="calico-typha-868b89495c-q888j" May 10 02:16:20.492333 kubelet[2271]: I0510 02:16:20.492265 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56dada48-8879-40a3-af1d-c6ab39542dfa-typha-certs\") pod \"calico-typha-868b89495c-q888j\" (UID: \"56dada48-8879-40a3-af1d-c6ab39542dfa\") " pod="calico-system/calico-typha-868b89495c-q888j" May 10 02:16:20.492560 kubelet[2271]: I0510 02:16:20.492339 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56dada48-8879-40a3-af1d-c6ab39542dfa-tigera-ca-bundle\") pod \"calico-typha-868b89495c-q888j\" (UID: \"56dada48-8879-40a3-af1d-c6ab39542dfa\") " pod="calico-system/calico-typha-868b89495c-q888j" May 10 02:16:20.492560 kubelet[2271]: I0510 02:16:20.492375 2271 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhtq\" (UniqueName: \"kubernetes.io/projected/56dada48-8879-40a3-af1d-c6ab39542dfa-kube-api-access-nfhtq\") pod \"calico-typha-868b89495c-q888j\" (UID: \"56dada48-8879-40a3-af1d-c6ab39542dfa\") " pod="calico-system/calico-typha-868b89495c-q888j" May 10 02:16:20.547391 kubelet[2271]: I0510 02:16:20.547324 2271 topology_manager.go:215] "Topology Admit Handler" podUID="a4e48408-11ae-41e1-a9df-671daf11213c" podNamespace="calico-system" podName="calico-node-ctnlg" May 10 02:16:20.592883 kubelet[2271]: I0510 02:16:20.592807 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e48408-11ae-41e1-a9df-671daf11213c-tigera-ca-bundle\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593095 kubelet[2271]: I0510 02:16:20.592939 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-var-lib-calico\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593095 kubelet[2271]: I0510 02:16:20.592972 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-bin-dir\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593095 kubelet[2271]: I0510 02:16:20.593014 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-log-dir\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593095 kubelet[2271]: I0510 02:16:20.593065 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-var-run-calico\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593095 kubelet[2271]: I0510 02:16:20.593093 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-net-dir\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593377 kubelet[2271]: I0510 02:16:20.593119 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-lib-modules\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593377 kubelet[2271]: I0510 02:16:20.593147 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-xtables-lock\") pod \"calico-node-ctnlg\" (UID: 
\"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593377 kubelet[2271]: I0510 02:16:20.593182 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-policysync\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593377 kubelet[2271]: I0510 02:16:20.593216 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a4e48408-11ae-41e1-a9df-671daf11213c-node-certs\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593377 kubelet[2271]: I0510 02:16:20.593246 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcdfc\" (UniqueName: \"kubernetes.io/projected/a4e48408-11ae-41e1-a9df-671daf11213c-kube-api-access-dcdfc\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.593693 kubelet[2271]: I0510 02:16:20.593300 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-flexvol-driver-host\") pod \"calico-node-ctnlg\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " pod="calico-system/calico-node-ctnlg" May 10 02:16:20.698687 kubelet[2271]: E0510 02:16:20.698552 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.698687 kubelet[2271]: W0510 02:16:20.698579 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.703927 kubelet[2271]: E0510 02:16:20.698638 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.711411 kubelet[2271]: E0510 02:16:20.711237 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.711411 kubelet[2271]: W0510 02:16:20.711279 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.711411 kubelet[2271]: E0510 02:16:20.711310 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.711627 kubelet[2271]: E0510 02:16:20.711548 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.711627 kubelet[2271]: W0510 02:16:20.711574 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.711627 kubelet[2271]: E0510 02:16:20.711588 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.711823 kubelet[2271]: E0510 02:16:20.711812 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.711927 kubelet[2271]: W0510 02:16:20.711824 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.711927 kubelet[2271]: E0510 02:16:20.711858 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.712483 kubelet[2271]: E0510 02:16:20.712212 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.712483 kubelet[2271]: W0510 02:16:20.712230 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.712483 kubelet[2271]: E0510 02:16:20.712454 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.712483 kubelet[2271]: W0510 02:16:20.712467 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.716689 env[1300]: time="2025-05-10T02:16:20.715915616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-868b89495c-q888j,Uid:56dada48-8879-40a3-af1d-c6ab39542dfa,Namespace:calico-system,Attempt:0,}" May 10 02:16:20.721385 kubelet[2271]: E0510 02:16:20.720252 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.721385 kubelet[2271]: W0510 02:16:20.720284 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.721385 kubelet[2271]: E0510 02:16:20.721215 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.721385 kubelet[2271]: E0510 02:16:20.721245 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.721385 kubelet[2271]: E0510 02:16:20.721292 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.737274 kubelet[2271]: E0510 02:16:20.736475 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.737274 kubelet[2271]: W0510 02:16:20.736503 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.737274 kubelet[2271]: E0510 02:16:20.736741 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.751714 kubelet[2271]: E0510 02:16:20.751329 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.751714 kubelet[2271]: W0510 02:16:20.751389 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.751714 kubelet[2271]: E0510 02:16:20.751488 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.762290 kubelet[2271]: E0510 02:16:20.757780 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.762290 kubelet[2271]: W0510 02:16:20.757797 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.762290 kubelet[2271]: E0510 02:16:20.757949 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.762290 kubelet[2271]: E0510 02:16:20.758117 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.762290 kubelet[2271]: W0510 02:16:20.758131 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.762290 kubelet[2271]: E0510 02:16:20.758249 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.762290 kubelet[2271]: E0510 02:16:20.758436 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.762290 kubelet[2271]: W0510 02:16:20.758450 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.762290 kubelet[2271]: E0510 02:16:20.758568 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.762290 kubelet[2271]: E0510 02:16:20.758842 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.762910 kubelet[2271]: W0510 02:16:20.758856 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.762910 kubelet[2271]: E0510 02:16:20.758904 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.762910 kubelet[2271]: E0510 02:16:20.759229 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.762910 kubelet[2271]: W0510 02:16:20.759243 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.762910 kubelet[2271]: E0510 02:16:20.759283 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.762910 kubelet[2271]: E0510 02:16:20.759606 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.762910 kubelet[2271]: W0510 02:16:20.759620 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.762910 kubelet[2271]: E0510 02:16:20.759752 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.762910 kubelet[2271]: E0510 02:16:20.759959 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.762910 kubelet[2271]: W0510 02:16:20.759973 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.763396 kubelet[2271]: E0510 02:16:20.760124 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.763396 kubelet[2271]: E0510 02:16:20.760354 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.763396 kubelet[2271]: W0510 02:16:20.760368 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.763396 kubelet[2271]: E0510 02:16:20.760389 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.786683 kubelet[2271]: E0510 02:16:20.782973 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.786683 kubelet[2271]: W0510 02:16:20.783004 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.786683 kubelet[2271]: E0510 02:16:20.783044 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.844716 env[1300]: time="2025-05-10T02:16:20.844219332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:16:20.844716 env[1300]: time="2025-05-10T02:16:20.844331829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:16:20.844716 env[1300]: time="2025-05-10T02:16:20.844349474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:16:20.845304 env[1300]: time="2025-05-10T02:16:20.845159640Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2 pid=2665 runtime=io.containerd.runc.v2 May 10 02:16:20.862600 kubelet[2271]: I0510 02:16:20.857276 2271 topology_manager.go:215] "Topology Admit Handler" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" podNamespace="calico-system" podName="csi-node-driver-msjzq" May 10 02:16:20.862600 kubelet[2271]: E0510 02:16:20.857668 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:20.869019 env[1300]: time="2025-05-10T02:16:20.868968022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ctnlg,Uid:a4e48408-11ae-41e1-a9df-671daf11213c,Namespace:calico-system,Attempt:0,}" May 10 02:16:20.876829 kubelet[2271]: E0510 02:16:20.876799 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.876829 kubelet[2271]: W0510 02:16:20.876825 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.877015 kubelet[2271]: E0510 02:16:20.876858 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.877786 kubelet[2271]: E0510 02:16:20.877761 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.877897 kubelet[2271]: W0510 02:16:20.877798 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.877897 kubelet[2271]: E0510 02:16:20.877819 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.881089 kubelet[2271]: E0510 02:16:20.880900 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.881089 kubelet[2271]: W0510 02:16:20.880922 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.881089 kubelet[2271]: E0510 02:16:20.880939 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.886578 kubelet[2271]: E0510 02:16:20.886549 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.886578 kubelet[2271]: W0510 02:16:20.886572 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.886764 kubelet[2271]: E0510 02:16:20.886590 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.887218 kubelet[2271]: E0510 02:16:20.887181 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.887303 kubelet[2271]: W0510 02:16:20.887223 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.887303 kubelet[2271]: E0510 02:16:20.887246 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.890914 kubelet[2271]: E0510 02:16:20.888017 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.890914 kubelet[2271]: W0510 02:16:20.888053 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.890914 kubelet[2271]: E0510 02:16:20.888074 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.890914 kubelet[2271]: E0510 02:16:20.888856 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.890914 kubelet[2271]: W0510 02:16:20.888908 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.890914 kubelet[2271]: E0510 02:16:20.888926 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.898673 kubelet[2271]: E0510 02:16:20.895382 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.898673 kubelet[2271]: W0510 02:16:20.895426 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.898673 kubelet[2271]: E0510 02:16:20.895459 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.900649 kubelet[2271]: E0510 02:16:20.900605 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.901050 kubelet[2271]: W0510 02:16:20.900653 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.901050 kubelet[2271]: E0510 02:16:20.900703 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.911597 kubelet[2271]: E0510 02:16:20.911522 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.911775 kubelet[2271]: W0510 02:16:20.911589 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.911775 kubelet[2271]: E0510 02:16:20.911649 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.920960 kubelet[2271]: E0510 02:16:20.920925 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.921441 kubelet[2271]: W0510 02:16:20.920968 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.921441 kubelet[2271]: E0510 02:16:20.920996 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.921950 kubelet[2271]: E0510 02:16:20.921926 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.921950 kubelet[2271]: W0510 02:16:20.921947 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.922094 kubelet[2271]: E0510 02:16:20.921965 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.922380 kubelet[2271]: E0510 02:16:20.922354 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.922380 kubelet[2271]: W0510 02:16:20.922376 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.922539 kubelet[2271]: E0510 02:16:20.922393 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.922708 kubelet[2271]: E0510 02:16:20.922685 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.922708 kubelet[2271]: W0510 02:16:20.922705 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.922851 kubelet[2271]: E0510 02:16:20.922724 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.923003 kubelet[2271]: E0510 02:16:20.922980 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.923003 kubelet[2271]: W0510 02:16:20.923000 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.923003 kubelet[2271]: E0510 02:16:20.923016 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.923318 kubelet[2271]: E0510 02:16:20.923278 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.923318 kubelet[2271]: W0510 02:16:20.923299 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.923318 kubelet[2271]: E0510 02:16:20.923314 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.928923 kubelet[2271]: E0510 02:16:20.928893 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.928923 kubelet[2271]: W0510 02:16:20.928918 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.929073 kubelet[2271]: E0510 02:16:20.928941 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.929212 kubelet[2271]: E0510 02:16:20.929186 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.929212 kubelet[2271]: W0510 02:16:20.929208 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.929386 kubelet[2271]: E0510 02:16:20.929225 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.929519 kubelet[2271]: E0510 02:16:20.929487 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.929519 kubelet[2271]: W0510 02:16:20.929507 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.929519 kubelet[2271]: E0510 02:16:20.929521 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.929792 kubelet[2271]: E0510 02:16:20.929767 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.929792 kubelet[2271]: W0510 02:16:20.929787 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.929949 kubelet[2271]: E0510 02:16:20.929802 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.930211 kubelet[2271]: E0510 02:16:20.930165 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.930211 kubelet[2271]: W0510 02:16:20.930187 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.930211 kubelet[2271]: E0510 02:16:20.930203 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.930442 kubelet[2271]: I0510 02:16:20.930243 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b62d1657-f00b-4957-b89b-113ee88c8696-varrun\") pod \"csi-node-driver-msjzq\" (UID: \"b62d1657-f00b-4957-b89b-113ee88c8696\") " pod="calico-system/csi-node-driver-msjzq" May 10 02:16:20.941218 kubelet[2271]: E0510 02:16:20.941183 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.941218 kubelet[2271]: W0510 02:16:20.941211 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.941406 kubelet[2271]: E0510 02:16:20.941240 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.941406 kubelet[2271]: I0510 02:16:20.941270 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn6sm\" (UniqueName: \"kubernetes.io/projected/b62d1657-f00b-4957-b89b-113ee88c8696-kube-api-access-hn6sm\") pod \"csi-node-driver-msjzq\" (UID: \"b62d1657-f00b-4957-b89b-113ee88c8696\") " pod="calico-system/csi-node-driver-msjzq" May 10 02:16:20.942058 kubelet[2271]: E0510 02:16:20.942031 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.942058 kubelet[2271]: W0510 02:16:20.942055 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.942315 kubelet[2271]: E0510 02:16:20.942228 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.942315 kubelet[2271]: I0510 02:16:20.942272 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b62d1657-f00b-4957-b89b-113ee88c8696-socket-dir\") pod \"csi-node-driver-msjzq\" (UID: \"b62d1657-f00b-4957-b89b-113ee88c8696\") " pod="calico-system/csi-node-driver-msjzq" May 10 02:16:20.942468 kubelet[2271]: E0510 02:16:20.942337 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.942468 kubelet[2271]: W0510 02:16:20.942352 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.942579 kubelet[2271]: E0510 02:16:20.942475 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.942745 kubelet[2271]: E0510 02:16:20.942699 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.942745 kubelet[2271]: W0510 02:16:20.942741 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.942943 kubelet[2271]: E0510 02:16:20.942764 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.943072 kubelet[2271]: E0510 02:16:20.943050 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.943072 kubelet[2271]: W0510 02:16:20.943069 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.943234 kubelet[2271]: E0510 02:16:20.943092 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.943234 kubelet[2271]: I0510 02:16:20.943118 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b62d1657-f00b-4957-b89b-113ee88c8696-registration-dir\") pod \"csi-node-driver-msjzq\" (UID: \"b62d1657-f00b-4957-b89b-113ee88c8696\") " pod="calico-system/csi-node-driver-msjzq" May 10 02:16:20.943469 kubelet[2271]: E0510 02:16:20.943405 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.943469 kubelet[2271]: W0510 02:16:20.943426 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.943469 kubelet[2271]: E0510 02:16:20.943449 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.943745 kubelet[2271]: E0510 02:16:20.943723 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.943745 kubelet[2271]: W0510 02:16:20.943741 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.943911 kubelet[2271]: E0510 02:16:20.943768 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.954939 kubelet[2271]: E0510 02:16:20.951321 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.954939 kubelet[2271]: W0510 02:16:20.951343 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.954939 kubelet[2271]: E0510 02:16:20.951373 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.959227 kubelet[2271]: E0510 02:16:20.959193 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.959227 kubelet[2271]: W0510 02:16:20.959217 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.959500 kubelet[2271]: E0510 02:16:20.959250 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.959500 kubelet[2271]: I0510 02:16:20.959284 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b62d1657-f00b-4957-b89b-113ee88c8696-kubelet-dir\") pod \"csi-node-driver-msjzq\" (UID: \"b62d1657-f00b-4957-b89b-113ee88c8696\") " pod="calico-system/csi-node-driver-msjzq" May 10 02:16:20.964265 kubelet[2271]: E0510 02:16:20.964222 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.964265 kubelet[2271]: W0510 02:16:20.964260 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.964457 kubelet[2271]: E0510 02:16:20.964281 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.964976 kubelet[2271]: E0510 02:16:20.964726 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.964976 kubelet[2271]: W0510 02:16:20.964975 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.965119 kubelet[2271]: E0510 02:16:20.964995 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.967515 kubelet[2271]: E0510 02:16:20.966758 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.967515 kubelet[2271]: W0510 02:16:20.966794 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.967515 kubelet[2271]: E0510 02:16:20.966821 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.967515 kubelet[2271]: E0510 02:16:20.967141 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.967515 kubelet[2271]: W0510 02:16:20.967156 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.967515 kubelet[2271]: E0510 02:16:20.967172 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:20.967515 kubelet[2271]: E0510 02:16:20.967428 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:20.967515 kubelet[2271]: W0510 02:16:20.967442 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:20.967515 kubelet[2271]: E0510 02:16:20.967456 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:20.999040 env[1300]: time="2025-05-10T02:16:20.996436286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:16:20.999253 env[1300]: time="2025-05-10T02:16:20.999071231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:16:20.999253 env[1300]: time="2025-05-10T02:16:20.999153707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:16:21.006676 env[1300]: time="2025-05-10T02:16:21.005446336Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3 pid=2745 runtime=io.containerd.runc.v2 May 10 02:16:21.071335 kubelet[2271]: E0510 02:16:21.071094 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.071335 kubelet[2271]: W0510 02:16:21.071122 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.071335 kubelet[2271]: E0510 02:16:21.071151 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.072071 kubelet[2271]: E0510 02:16:21.071774 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.072071 kubelet[2271]: W0510 02:16:21.071810 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.072071 kubelet[2271]: E0510 02:16:21.071836 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:21.072676 kubelet[2271]: E0510 02:16:21.072372 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.072676 kubelet[2271]: W0510 02:16:21.072408 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.072676 kubelet[2271]: E0510 02:16:21.072435 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.076821 kubelet[2271]: E0510 02:16:21.076787 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.077024 kubelet[2271]: W0510 02:16:21.076997 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.077649 kubelet[2271]: E0510 02:16:21.077611 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.077797 kubelet[2271]: W0510 02:16:21.077770 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.078250 kubelet[2271]: E0510 02:16:21.078230 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.078398 kubelet[2271]: W0510 02:16:21.078360 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.078850 kubelet[2271]: E0510 02:16:21.078828 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.079002 kubelet[2271]: W0510 02:16:21.078976 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.079161 kubelet[2271]: E0510 02:16:21.079137 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.079578 kubelet[2271]: E0510 02:16:21.079557 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.079763 kubelet[2271]: W0510 02:16:21.079733 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.079931 kubelet[2271]: E0510 02:16:21.079906 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:21.080360 kubelet[2271]: E0510 02:16:21.080332 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.080485 kubelet[2271]: W0510 02:16:21.080457 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.080684 kubelet[2271]: E0510 02:16:21.080640 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.082683 kubelet[2271]: E0510 02:16:21.082176 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.082683 kubelet[2271]: E0510 02:16:21.082459 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.082683 kubelet[2271]: W0510 02:16:21.082475 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.082683 kubelet[2271]: E0510 02:16:21.082503 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.083135 kubelet[2271]: E0510 02:16:21.083109 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.083838 kubelet[2271]: E0510 02:16:21.083815 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.090693 kubelet[2271]: E0510 02:16:21.088893 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.090693 kubelet[2271]: W0510 02:16:21.088915 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.090693 kubelet[2271]: E0510 02:16:21.088949 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.090693 kubelet[2271]: E0510 02:16:21.089331 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.090693 kubelet[2271]: W0510 02:16:21.089354 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.090693 kubelet[2271]: E0510 02:16:21.089505 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:21.090693 kubelet[2271]: E0510 02:16:21.089703 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.090693 kubelet[2271]: W0510 02:16:21.089717 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.090693 kubelet[2271]: E0510 02:16:21.089840 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.090693 kubelet[2271]: E0510 02:16:21.090026 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.091299 kubelet[2271]: W0510 02:16:21.090041 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.091299 kubelet[2271]: E0510 02:16:21.090213 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.091299 kubelet[2271]: E0510 02:16:21.090392 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.091299 kubelet[2271]: W0510 02:16:21.090406 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.091299 kubelet[2271]: E0510 02:16:21.090511 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.091299 kubelet[2271]: E0510 02:16:21.090779 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.091299 kubelet[2271]: W0510 02:16:21.090796 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.091299 kubelet[2271]: E0510 02:16:21.090920 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.091299 kubelet[2271]: E0510 02:16:21.091074 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.091299 kubelet[2271]: W0510 02:16:21.091087 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.091832 kubelet[2271]: E0510 02:16:21.091108 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:21.091832 kubelet[2271]: E0510 02:16:21.091389 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.091832 kubelet[2271]: W0510 02:16:21.091404 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.091832 kubelet[2271]: E0510 02:16:21.091425 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.093689 kubelet[2271]: E0510 02:16:21.093400 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.093689 kubelet[2271]: W0510 02:16:21.093424 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.093689 kubelet[2271]: E0510 02:16:21.093464 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.093924 kubelet[2271]: E0510 02:16:21.093853 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.093924 kubelet[2271]: W0510 02:16:21.093881 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.094044 kubelet[2271]: E0510 02:16:21.093996 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.094665 kubelet[2271]: E0510 02:16:21.094164 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.094665 kubelet[2271]: W0510 02:16:21.094183 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.094665 kubelet[2271]: E0510 02:16:21.094218 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.094852 kubelet[2271]: E0510 02:16:21.094802 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.094852 kubelet[2271]: W0510 02:16:21.094816 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.094852 kubelet[2271]: E0510 02:16:21.094849 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:21.103651 kubelet[2271]: E0510 02:16:21.103601 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.103651 kubelet[2271]: W0510 02:16:21.103640 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.105336 kubelet[2271]: E0510 02:16:21.105309 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.105336 kubelet[2271]: W0510 02:16:21.105332 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.105486 kubelet[2271]: E0510 02:16:21.105352 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.105730 kubelet[2271]: E0510 02:16:21.105706 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.105730 kubelet[2271]: W0510 02:16:21.105726 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.105918 kubelet[2271]: E0510 02:16:21.105742 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.105918 kubelet[2271]: E0510 02:16:21.105771 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:21.124686 kubelet[2271]: E0510 02:16:21.121902 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:21.124686 kubelet[2271]: W0510 02:16:21.121926 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:21.124686 kubelet[2271]: E0510 02:16:21.121946 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:21.139094 env[1300]: time="2025-05-10T02:16:21.139039186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-868b89495c-q888j,Uid:56dada48-8879-40a3-af1d-c6ab39542dfa,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\"" May 10 02:16:21.141268 env[1300]: time="2025-05-10T02:16:21.141227268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 10 02:16:21.164000 audit[2810]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2810 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:21.164000 audit[2810]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffead377700 a2=0 a3=7ffead3776ec items=0 ppid=2455 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:21.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:21.168000 audit[2810]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2810 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:21.168000 audit[2810]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffead377700 a2=0 a3=0 items=0 ppid=2455 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:21.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:21.197391 env[1300]: time="2025-05-10T02:16:21.194598774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ctnlg,Uid:a4e48408-11ae-41e1-a9df-671daf11213c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\"" May 10 02:16:22.517589 kubelet[2271]: E0510 02:16:22.517525 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:24.519058 kubelet[2271]: E0510 02:16:24.518981 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:24.969117 env[1300]: time="2025-05-10T02:16:24.969054346Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:24.972831 env[1300]: time="2025-05-10T02:16:24.972786329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:24.976312 env[1300]: time="2025-05-10T02:16:24.976270165Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:24.979242 env[1300]: time="2025-05-10T02:16:24.979207403Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:24.980374 env[1300]: time="2025-05-10T02:16:24.980328516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 10 02:16:24.993491 env[1300]: time="2025-05-10T02:16:24.993435204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 10 02:16:25.011131 env[1300]: time="2025-05-10T02:16:25.011091793Z" level=info msg="CreateContainer within sandbox \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 10 02:16:25.029615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534813880.mount: Deactivated successfully. May 10 02:16:25.033958 env[1300]: time="2025-05-10T02:16:25.033840918Z" level=info msg="CreateContainer within sandbox \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff\"" May 10 02:16:25.036911 env[1300]: time="2025-05-10T02:16:25.036864497Z" level=info msg="StartContainer for \"9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff\"" May 10 02:16:25.155386 env[1300]: time="2025-05-10T02:16:25.155331564Z" level=info msg="StartContainer for \"9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff\" returns successfully" May 10 02:16:25.666715 kubelet[2271]: E0510 02:16:25.666100 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.666715 kubelet[2271]: W0510 02:16:25.666135 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.666715 kubelet[2271]: E0510 02:16:25.666165 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.666715 kubelet[2271]: E0510 02:16:25.666439 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.666715 kubelet[2271]: W0510 02:16:25.666453 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.666715 kubelet[2271]: E0510 02:16:25.666468 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:25.668220 kubelet[2271]: E0510 02:16:25.668094 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.668220 kubelet[2271]: W0510 02:16:25.668121 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.668220 kubelet[2271]: E0510 02:16:25.668151 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.668748 kubelet[2271]: E0510 02:16:25.668702 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.668870 kubelet[2271]: W0510 02:16:25.668759 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.668870 kubelet[2271]: E0510 02:16:25.668780 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.669286 kubelet[2271]: E0510 02:16:25.669208 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.669286 kubelet[2271]: W0510 02:16:25.669276 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.669422 kubelet[2271]: E0510 02:16:25.669300 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.669823 kubelet[2271]: E0510 02:16:25.669787 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.669823 kubelet[2271]: W0510 02:16:25.669810 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.670026 kubelet[2271]: E0510 02:16:25.669836 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.670446 kubelet[2271]: E0510 02:16:25.670414 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.670858 kubelet[2271]: W0510 02:16:25.670453 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.670858 kubelet[2271]: E0510 02:16:25.670471 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:25.670858 kubelet[2271]: E0510 02:16:25.670816 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.670858 kubelet[2271]: W0510 02:16:25.670830 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.671107 kubelet[2271]: E0510 02:16:25.670859 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.671378 kubelet[2271]: E0510 02:16:25.671340 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.671378 kubelet[2271]: W0510 02:16:25.671363 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.671899 kubelet[2271]: E0510 02:16:25.671382 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.671899 kubelet[2271]: E0510 02:16:25.671687 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.671899 kubelet[2271]: W0510 02:16:25.671702 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.671899 kubelet[2271]: E0510 02:16:25.671718 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.673296 kubelet[2271]: E0510 02:16:25.672744 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.673296 kubelet[2271]: W0510 02:16:25.672765 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.673296 kubelet[2271]: E0510 02:16:25.672910 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.673296 kubelet[2271]: E0510 02:16:25.673153 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.673296 kubelet[2271]: W0510 02:16:25.673167 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.673296 kubelet[2271]: E0510 02:16:25.673184 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:25.674502 kubelet[2271]: E0510 02:16:25.674105 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.674502 kubelet[2271]: W0510 02:16:25.674124 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.674502 kubelet[2271]: E0510 02:16:25.674142 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.674502 kubelet[2271]: E0510 02:16:25.674377 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.674502 kubelet[2271]: W0510 02:16:25.674390 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.674502 kubelet[2271]: E0510 02:16:25.674406 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.675136 kubelet[2271]: E0510 02:16:25.675009 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.675136 kubelet[2271]: W0510 02:16:25.675041 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.675136 kubelet[2271]: E0510 02:16:25.675058 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:25.686613 kubelet[2271]: I0510 02:16:25.686527 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-868b89495c-q888j" podStartSLOduration=1.8449981009999998 podStartE2EDuration="5.686506004s" podCreationTimestamp="2025-05-10 02:16:20 +0000 UTC" firstStartedPulling="2025-05-10 02:16:21.14088607 +0000 UTC m=+20.924067592" lastFinishedPulling="2025-05-10 02:16:24.98239397 +0000 UTC m=+24.765575495" observedRunningTime="2025-05-10 02:16:25.662575183 +0000 UTC m=+25.445756717" watchObservedRunningTime="2025-05-10 02:16:25.686506004 +0000 UTC m=+25.469687529" May 10 02:16:25.728108 kernel: kauditd_printk_skb: 8 callbacks suppressed May 10 02:16:25.728393 kernel: audit: type=1325 audit(1746843385.717:303): table=filter:95 family=2 entries=17 op=nft_register_rule pid=2880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:25.717000 audit[2880]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:25.717000 audit[2880]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff30954680 a2=0 a3=7fff3095466c items=0 ppid=2455 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:25.738606 kubelet[2271]: E0510 02:16:25.733236 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.738606 kubelet[2271]: W0510 02:16:25.733297 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.738606 kubelet[2271]: E0510 02:16:25.733365 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.738919 kernel: audit: type=1300 audit(1746843385.717:303): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff30954680 a2=0 a3=7fff3095466c items=0 ppid=2455 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:25.741024 kubelet[2271]: E0510 02:16:25.740990 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.741024 kubelet[2271]: W0510 02:16:25.741017 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.741182 kubelet[2271]: E0510 02:16:25.741038 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:25.742571 kubelet[2271]: E0510 02:16:25.742533 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.742571 kubelet[2271]: W0510 02:16:25.742555 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.742759 kubelet[2271]: E0510 02:16:25.742656 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.744788 kubelet[2271]: E0510 02:16:25.744745 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.744788 kubelet[2271]: W0510 02:16:25.744774 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.744988 kubelet[2271]: E0510 02:16:25.744879 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.747006 kubelet[2271]: E0510 02:16:25.746956 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.747006 kubelet[2271]: W0510 02:16:25.747004 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.747396 kubelet[2271]: E0510 02:16:25.747179 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.747538 kubelet[2271]: E0510 02:16:25.747424 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.747538 kubelet[2271]: W0510 02:16:25.747458 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:25.748097 kubelet[2271]: E0510 02:16:25.747867 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:25.751609 kubelet[2271]: E0510 02:16:25.751584 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.751609 kubelet[2271]: W0510 02:16:25.751606 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.751863 kernel: audit: type=1327 audit(1746843385.717:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:25.751923 kernel: audit: type=1325 audit(1746843385.745:304): table=nat:96 family=2 entries=19 op=nft_register_chain pid=2880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:25.745000 audit[2880]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:25.752071 kubelet[2271]: E0510 02:16:25.751787 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.756118 kubelet[2271]: E0510 02:16:25.756084 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.756118 kubelet[2271]: W0510 02:16:25.756106 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.756301 kubelet[2271]: E0510 02:16:25.756271 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.761672 kernel: audit: type=1300 audit(1746843385.745:304): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff30954680 a2=0 a3=7fff3095466c items=0 ppid=2455 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:25.745000 audit[2880]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff30954680 a2=0 a3=7fff3095466c items=0 ppid=2455 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:25.761992 kubelet[2271]: E0510 02:16:25.756829 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.761992 kubelet[2271]: W0510 02:16:25.756843 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.761992 kubelet[2271]: E0510 02:16:25.756986 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:25.761992 kubelet[2271]: E0510 02:16:25.757180 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.761992 kubelet[2271]: W0510 02:16:25.757193 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.761992 kubelet[2271]: E0510 02:16:25.757292 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.761992 kubelet[2271]: E0510 02:16:25.757524 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.761992 kubelet[2271]: W0510 02:16:25.757538 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.761992 kubelet[2271]: E0510 02:16:25.757564 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.761992 kubelet[2271]: E0510 02:16:25.757905 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.762557 kubelet[2271]: W0510 02:16:25.757920 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.762557 kubelet[2271]: E0510 02:16:25.757943 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.762557 kubelet[2271]: E0510 02:16:25.758650 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.762557 kubelet[2271]: W0510 02:16:25.758665 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.762557 kubelet[2271]: E0510 02:16:25.758780 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.762557 kubelet[2271]: E0510 02:16:25.759024 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.762557 kubelet[2271]: W0510 02:16:25.759038 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.762557 kubelet[2271]: E0510 02:16:25.759147 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:25.762557 kubelet[2271]: E0510 02:16:25.759330 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.762557 kubelet[2271]: W0510 02:16:25.759342 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.763076 kubelet[2271]: E0510 02:16:25.759361 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.763076 kubelet[2271]: E0510 02:16:25.759891 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.763076 kubelet[2271]: W0510 02:16:25.759905 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.763076 kubelet[2271]: E0510 02:16:25.759927 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.763076 kubelet[2271]: E0510 02:16:25.760880 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.763076 kubelet[2271]: W0510 02:16:25.760895 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.763076 kubelet[2271]: E0510 02:16:25.760946 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:25.763076 kubelet[2271]: E0510 02:16:25.761232 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:25.763076 kubelet[2271]: W0510 02:16:25.761246 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:25.763076 kubelet[2271]: E0510 02:16:25.761261 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:25.745000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:25.768836 kernel: audit: type=1327 audit(1746843385.745:304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:26.517285 kubelet[2271]: E0510 02:16:26.517182 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:26.682233 kubelet[2271]: E0510 02:16:26.682027 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.682233 kubelet[2271]: W0510 02:16:26.682061 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.682233 kubelet[2271]: E0510 02:16:26.682088 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.683539 kubelet[2271]: E0510 02:16:26.683451 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.683539 kubelet[2271]: W0510 02:16:26.683478 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.683539 kubelet[2271]: E0510 02:16:26.683496 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.684238 kubelet[2271]: E0510 02:16:26.684067 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.684238 kubelet[2271]: W0510 02:16:26.684087 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.684238 kubelet[2271]: E0510 02:16:26.684104 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.756934 kubelet[2271]: E0510 02:16:26.684420 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.756934 kubelet[2271]: W0510 02:16:26.684450 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.756934 kubelet[2271]: E0510 02:16:26.684465 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:26.756934 kubelet[2271]: E0510 02:16:26.684850 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.756934 kubelet[2271]: W0510 02:16:26.684866 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.756934 kubelet[2271]: E0510 02:16:26.684883 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.756934 kubelet[2271]: E0510 02:16:26.685363 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.756934 kubelet[2271]: W0510 02:16:26.685381 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.756934 kubelet[2271]: E0510 02:16:26.685397 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.756934 kubelet[2271]: E0510 02:16:26.686493 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.757507 kubelet[2271]: W0510 02:16:26.686511 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.757507 kubelet[2271]: E0510 02:16:26.686527 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.757507 kubelet[2271]: E0510 02:16:26.687780 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.757507 kubelet[2271]: W0510 02:16:26.687811 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.757507 kubelet[2271]: E0510 02:16:26.687845 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.757507 kubelet[2271]: E0510 02:16:26.688211 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.757507 kubelet[2271]: W0510 02:16:26.688226 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.757507 kubelet[2271]: E0510 02:16:26.688243 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:26.757507 kubelet[2271]: E0510 02:16:26.688487 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.757507 kubelet[2271]: W0510 02:16:26.688502 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.758191 kubelet[2271]: E0510 02:16:26.688527 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.758191 kubelet[2271]: E0510 02:16:26.688808 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.758191 kubelet[2271]: W0510 02:16:26.688822 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.758191 kubelet[2271]: E0510 02:16:26.688838 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.758191 kubelet[2271]: E0510 02:16:26.689126 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.758191 kubelet[2271]: W0510 02:16:26.689141 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.758191 kubelet[2271]: E0510 02:16:26.689156 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.758191 kubelet[2271]: E0510 02:16:26.689424 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.758191 kubelet[2271]: W0510 02:16:26.689438 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.758191 kubelet[2271]: E0510 02:16:26.689455 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.758815 kubelet[2271]: E0510 02:16:26.689707 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.758815 kubelet[2271]: W0510 02:16:26.689722 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.758815 kubelet[2271]: E0510 02:16:26.689738 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:26.758815 kubelet[2271]: E0510 02:16:26.690025 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.758815 kubelet[2271]: W0510 02:16:26.690039 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.758815 kubelet[2271]: E0510 02:16:26.690055 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.758815 kubelet[2271]: E0510 02:16:26.750958 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.758815 kubelet[2271]: W0510 02:16:26.751007 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.758815 kubelet[2271]: E0510 02:16:26.751041 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.758815 kubelet[2271]: E0510 02:16:26.751443 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.759334 kubelet[2271]: W0510 02:16:26.751460 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.759334 kubelet[2271]: E0510 02:16:26.751483 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.759334 kubelet[2271]: E0510 02:16:26.751888 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.759334 kubelet[2271]: W0510 02:16:26.751913 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.759334 kubelet[2271]: E0510 02:16:26.751936 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.759334 kubelet[2271]: E0510 02:16:26.752296 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.759334 kubelet[2271]: W0510 02:16:26.752311 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.759334 kubelet[2271]: E0510 02:16:26.752346 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:26.759334 kubelet[2271]: E0510 02:16:26.752735 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.759334 kubelet[2271]: W0510 02:16:26.752750 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.771836 kubelet[2271]: E0510 02:16:26.752847 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.771836 kubelet[2271]: E0510 02:16:26.753135 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.771836 kubelet[2271]: W0510 02:16:26.753149 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.771836 kubelet[2271]: E0510 02:16:26.753238 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.771836 kubelet[2271]: E0510 02:16:26.753520 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.771836 kubelet[2271]: W0510 02:16:26.753546 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.771836 kubelet[2271]: E0510 02:16:26.753667 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.771836 kubelet[2271]: E0510 02:16:26.753945 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.771836 kubelet[2271]: W0510 02:16:26.753958 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.771836 kubelet[2271]: E0510 02:16:26.754025 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.773275 kubelet[2271]: E0510 02:16:26.754335 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.773275 kubelet[2271]: W0510 02:16:26.754356 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.773275 kubelet[2271]: E0510 02:16:26.754391 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:26.773275 kubelet[2271]: E0510 02:16:26.754843 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.773275 kubelet[2271]: W0510 02:16:26.754858 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.773275 kubelet[2271]: E0510 02:16:26.754874 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.773275 kubelet[2271]: E0510 02:16:26.755152 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.773275 kubelet[2271]: W0510 02:16:26.755167 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.773275 kubelet[2271]: E0510 02:16:26.755185 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.773275 kubelet[2271]: E0510 02:16:26.755449 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.775793 kubelet[2271]: W0510 02:16:26.755463 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.775793 kubelet[2271]: E0510 02:16:26.755478 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.775793 kubelet[2271]: E0510 02:16:26.758876 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.775793 kubelet[2271]: W0510 02:16:26.758892 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.775793 kubelet[2271]: E0510 02:16:26.758915 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.775793 kubelet[2271]: E0510 02:16:26.759280 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.775793 kubelet[2271]: W0510 02:16:26.759373 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.775793 kubelet[2271]: E0510 02:16:26.759399 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:26.775793 kubelet[2271]: E0510 02:16:26.759856 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.775793 kubelet[2271]: W0510 02:16:26.759899 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.777979 kubelet[2271]: E0510 02:16:26.759923 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.777979 kubelet[2271]: E0510 02:16:26.760275 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.777979 kubelet[2271]: W0510 02:16:26.760318 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.777979 kubelet[2271]: E0510 02:16:26.760338 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.777979 kubelet[2271]: E0510 02:16:26.760822 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.777979 kubelet[2271]: W0510 02:16:26.760836 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.777979 kubelet[2271]: E0510 02:16:26.760852 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 02:16:26.777979 kubelet[2271]: E0510 02:16:26.761722 2271 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 02:16:26.777979 kubelet[2271]: W0510 02:16:26.761737 2271 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 02:16:26.777979 kubelet[2271]: E0510 02:16:26.761754 2271 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 02:16:26.786227 env[1300]: time="2025-05-10T02:16:26.785851829Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:26.789724 env[1300]: time="2025-05-10T02:16:26.789688676Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:26.793347 env[1300]: time="2025-05-10T02:16:26.793312116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:26.796965 env[1300]: time="2025-05-10T02:16:26.796909964Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:26.798252 env[1300]: time="2025-05-10T02:16:26.798205604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 10 02:16:26.801613 env[1300]: time="2025-05-10T02:16:26.801561861Z" level=info msg="CreateContainer within sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 02:16:26.820877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount772062847.mount: Deactivated successfully. May 10 02:16:26.831610 env[1300]: time="2025-05-10T02:16:26.831521107Z" level=info msg="CreateContainer within sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\"" May 10 02:16:26.835121 env[1300]: time="2025-05-10T02:16:26.835037007Z" level=info msg="StartContainer for \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\"" May 10 02:16:26.949142 env[1300]: time="2025-05-10T02:16:26.949066908Z" level=info msg="StartContainer for \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\" returns successfully" May 10 02:16:26.995051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e-rootfs.mount: Deactivated successfully. 
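[Editor's note] The repeated "unexpected end of JSON input" / "executable file not found in $PATH" pairs above come from the kubelet probing the `nodeagent~uds` FlexVolume directory before the flexvol-driver container has installed the `uds` binary: the `init` call produces no output, so there is no JSON to unmarshal. As a point of reference, a FlexVolume driver invoked as `<driver> init` is expected to print a small JSON status object to stdout. The sketch below is illustrative only and is not the real `nodeagent~uds` driver (which the pod2daemon-flexvol image ships); it just shows the call convention the kubelet is waiting for.

```python
#!/usr/bin/env python3
# Minimal sketch of the FlexVolume call convention, assuming the documented
# protocol: the kubelet runs "<driver> init" from the plugin directory
# (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds)
# and parses a JSON status object from stdout. Empty output is exactly what
# produces "unexpected end of JSON input" in the log above.
import json
import sys


def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and declare that this driver does not implement
        # attach/detach, so the kubelet handles mounts locally.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any call this sketch does not handle is reported as not supported.
    print(json.dumps({"status": "Not supported", "message": f"unhandled call: {op}"}))
    return 1


if __name__ == "__main__":
    sys.exit(main())
```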
May 10 02:16:27.016970 env[1300]: time="2025-05-10T02:16:27.016898395Z" level=info msg="shim disconnected" id=c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e May 10 02:16:27.017399 env[1300]: time="2025-05-10T02:16:27.017342055Z" level=warning msg="cleaning up after shim disconnected" id=c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e namespace=k8s.io May 10 02:16:27.017525 env[1300]: time="2025-05-10T02:16:27.017496698Z" level=info msg="cleaning up dead shim" May 10 02:16:27.032411 env[1300]: time="2025-05-10T02:16:27.031060041Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:16:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2978 runtime=io.containerd.runc.v2\n" May 10 02:16:27.654435 env[1300]: time="2025-05-10T02:16:27.654351641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 10 02:16:28.518578 kubelet[2271]: E0510 02:16:28.518469 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:30.519076 kubelet[2271]: E0510 02:16:30.517528 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:32.517533 kubelet[2271]: E0510 02:16:32.516965 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:34.519127 kubelet[2271]: E0510 02:16:34.517064 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:35.408308 env[1300]: time="2025-05-10T02:16:35.408239339Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:35.410934 env[1300]: time="2025-05-10T02:16:35.410898209Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:35.413786 env[1300]: time="2025-05-10T02:16:35.413746367Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:35.416704 env[1300]: time="2025-05-10T02:16:35.416622471Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:35.417682 env[1300]: time="2025-05-10T02:16:35.417614554Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 10 02:16:35.423347 env[1300]: time="2025-05-10T02:16:35.423282129Z" level=info msg="CreateContainer within sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 10 02:16:35.439522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320469431.mount: Deactivated successfully. May 10 02:16:35.466990 env[1300]: time="2025-05-10T02:16:35.466932508Z" level=info msg="CreateContainer within sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\"" May 10 02:16:35.469403 env[1300]: time="2025-05-10T02:16:35.469367543Z" level=info msg="StartContainer for \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\"" May 10 02:16:35.563827 env[1300]: time="2025-05-10T02:16:35.563776671Z" level=info msg="StartContainer for \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\" returns successfully" May 10 02:16:36.525747 kubelet[2271]: E0510 02:16:36.525672 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:36.584889 env[1300]: time="2025-05-10T02:16:36.584806219Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 02:16:36.600404 kubelet[2271]: I0510 02:16:36.600358 2271 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 10 02:16:36.662522 kubelet[2271]: I0510 02:16:36.648561 2271 topology_manager.go:215] "Topology Admit Handler" podUID="3bda719f-2d0c-40dd-8013-db678548720f" podNamespace="calico-apiserver" podName="calico-apiserver-59c6df465c-t7qd5" May 10 02:16:36.662877 env[1300]: time="2025-05-10T02:16:36.661502310Z" level=info msg="shim disconnected" id=fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d May 10 02:16:36.662877 env[1300]: time="2025-05-10T02:16:36.661564611Z" level=warning msg="cleaning up after shim disconnected" id=fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d namespace=k8s.io May 10 02:16:36.662877 env[1300]: time="2025-05-10T02:16:36.661579691Z" level=info msg="cleaning up dead shim" May 10 02:16:36.659465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d-rootfs.mount: Deactivated successfully. 
May 10 02:16:36.680850 kubelet[2271]: I0510 02:16:36.680798 2271 topology_manager.go:215] "Topology Admit Handler" podUID="26d91028-bb97-4013-8daa-28d750dcefc1" podNamespace="calico-system" podName="calico-kube-controllers-6d9c5fc9f8-m6gc5" May 10 02:16:36.681067 kubelet[2271]: I0510 02:16:36.681046 2271 topology_manager.go:215] "Topology Admit Handler" podUID="033eccb2-2101-4733-a8e6-14ca25528bba" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tzgf6" May 10 02:16:36.681258 kubelet[2271]: I0510 02:16:36.681218 2271 topology_manager.go:215] "Topology Admit Handler" podUID="cd3e93bd-3e59-43f7-987b-d85581ad5591" podNamespace="calico-apiserver" podName="calico-apiserver-59c6df465c-9n96s" May 10 02:16:36.701730 kubelet[2271]: I0510 02:16:36.697434 2271 topology_manager.go:215] "Topology Admit Handler" podUID="ab904080-8c4d-457a-bf1e-8c5d5229dc67" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6qgjr" May 10 02:16:36.701730 kubelet[2271]: I0510 02:16:36.697780 2271 topology_manager.go:215] "Topology Admit Handler" podUID="d0632dba-48ae-4acf-8c28-096b5737e007" podNamespace="calico-apiserver" podName="calico-apiserver-67d5855849-n77bg" May 10 02:16:36.710960 env[1300]: time="2025-05-10T02:16:36.710901752Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:16:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3047 runtime=io.containerd.runc.v2\n" May 10 02:16:36.730252 kubelet[2271]: I0510 02:16:36.730189 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3bda719f-2d0c-40dd-8013-db678548720f-calico-apiserver-certs\") pod \"calico-apiserver-59c6df465c-t7qd5\" (UID: \"3bda719f-2d0c-40dd-8013-db678548720f\") " pod="calico-apiserver/calico-apiserver-59c6df465c-t7qd5" May 10 02:16:36.730706 kubelet[2271]: I0510 02:16:36.730631 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2smzl\" (UniqueName: \"kubernetes.io/projected/3bda719f-2d0c-40dd-8013-db678548720f-kube-api-access-2smzl\") pod \"calico-apiserver-59c6df465c-t7qd5\" (UID: \"3bda719f-2d0c-40dd-8013-db678548720f\") " pod="calico-apiserver/calico-apiserver-59c6df465c-t7qd5" May 10 02:16:36.745294 env[1300]: time="2025-05-10T02:16:36.745251809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 10 02:16:36.833781 kubelet[2271]: I0510 02:16:36.833705 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd3e93bd-3e59-43f7-987b-d85581ad5591-calico-apiserver-certs\") pod \"calico-apiserver-59c6df465c-9n96s\" (UID: \"cd3e93bd-3e59-43f7-987b-d85581ad5591\") " pod="calico-apiserver/calico-apiserver-59c6df465c-9n96s" May 10 02:16:36.834221 kubelet[2271]: I0510 02:16:36.834191 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d0632dba-48ae-4acf-8c28-096b5737e007-calico-apiserver-certs\") pod \"calico-apiserver-67d5855849-n77bg\" (UID: \"d0632dba-48ae-4acf-8c28-096b5737e007\") " pod="calico-apiserver/calico-apiserver-67d5855849-n77bg" May 10 02:16:36.834823 kubelet[2271]: I0510 02:16:36.834409 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrzq8\" (UniqueName: 
\"kubernetes.io/projected/26d91028-bb97-4013-8daa-28d750dcefc1-kube-api-access-qrzq8\") pod \"calico-kube-controllers-6d9c5fc9f8-m6gc5\" (UID: \"26d91028-bb97-4013-8daa-28d750dcefc1\") " pod="calico-system/calico-kube-controllers-6d9c5fc9f8-m6gc5" May 10 02:16:36.834823 kubelet[2271]: I0510 02:16:36.834558 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/033eccb2-2101-4733-a8e6-14ca25528bba-config-volume\") pod \"coredns-7db6d8ff4d-tzgf6\" (UID: \"033eccb2-2101-4733-a8e6-14ca25528bba\") " pod="kube-system/coredns-7db6d8ff4d-tzgf6" May 10 02:16:36.834823 kubelet[2271]: I0510 02:16:36.834601 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab904080-8c4d-457a-bf1e-8c5d5229dc67-config-volume\") pod \"coredns-7db6d8ff4d-6qgjr\" (UID: \"ab904080-8c4d-457a-bf1e-8c5d5229dc67\") " pod="kube-system/coredns-7db6d8ff4d-6qgjr" May 10 02:16:36.835088 kubelet[2271]: I0510 02:16:36.835061 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwqjv\" (UniqueName: \"kubernetes.io/projected/cd3e93bd-3e59-43f7-987b-d85581ad5591-kube-api-access-jwqjv\") pod \"calico-apiserver-59c6df465c-9n96s\" (UID: \"cd3e93bd-3e59-43f7-987b-d85581ad5591\") " pod="calico-apiserver/calico-apiserver-59c6df465c-9n96s" May 10 02:16:36.835281 kubelet[2271]: I0510 02:16:36.835253 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9j9b\" (UniqueName: \"kubernetes.io/projected/d0632dba-48ae-4acf-8c28-096b5737e007-kube-api-access-h9j9b\") pod \"calico-apiserver-67d5855849-n77bg\" (UID: \"d0632dba-48ae-4acf-8c28-096b5737e007\") " pod="calico-apiserver/calico-apiserver-67d5855849-n77bg" May 10 02:16:36.835459 kubelet[2271]: I0510 02:16:36.835421 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26d91028-bb97-4013-8daa-28d750dcefc1-tigera-ca-bundle\") pod \"calico-kube-controllers-6d9c5fc9f8-m6gc5\" (UID: \"26d91028-bb97-4013-8daa-28d750dcefc1\") " pod="calico-system/calico-kube-controllers-6d9c5fc9f8-m6gc5" May 10 02:16:36.835638 kubelet[2271]: I0510 02:16:36.835594 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smkql\" (UniqueName: \"kubernetes.io/projected/033eccb2-2101-4733-a8e6-14ca25528bba-kube-api-access-smkql\") pod \"coredns-7db6d8ff4d-tzgf6\" (UID: \"033eccb2-2101-4733-a8e6-14ca25528bba\") " pod="kube-system/coredns-7db6d8ff4d-tzgf6" May 10 02:16:36.835897 kubelet[2271]: I0510 02:16:36.835857 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz666\" (UniqueName: \"kubernetes.io/projected/ab904080-8c4d-457a-bf1e-8c5d5229dc67-kube-api-access-kz666\") pod \"coredns-7db6d8ff4d-6qgjr\" (UID: \"ab904080-8c4d-457a-bf1e-8c5d5229dc67\") " pod="kube-system/coredns-7db6d8ff4d-6qgjr" May 10 02:16:36.988126 env[1300]: time="2025-05-10T02:16:36.987575006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6df465c-t7qd5,Uid:3bda719f-2d0c-40dd-8013-db678548720f,Namespace:calico-apiserver,Attempt:0,}" May 10 02:16:37.023891 env[1300]: time="2025-05-10T02:16:37.023834456Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67d5855849-n77bg,Uid:d0632dba-48ae-4acf-8c28-096b5737e007,Namespace:calico-apiserver,Attempt:0,}" May 10 02:16:37.026360 env[1300]: time="2025-05-10T02:16:37.026321098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9c5fc9f8-m6gc5,Uid:26d91028-bb97-4013-8daa-28d750dcefc1,Namespace:calico-system,Attempt:0,}" May 10 02:16:37.032213 env[1300]: time="2025-05-10T02:16:37.032174565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6df465c-9n96s,Uid:cd3e93bd-3e59-43f7-987b-d85581ad5591,Namespace:calico-apiserver,Attempt:0,}" May 10 02:16:37.053907 env[1300]: time="2025-05-10T02:16:37.053864003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6qgjr,Uid:ab904080-8c4d-457a-bf1e-8c5d5229dc67,Namespace:kube-system,Attempt:0,}" May 10 02:16:37.065366 env[1300]: time="2025-05-10T02:16:37.065309570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tzgf6,Uid:033eccb2-2101-4733-a8e6-14ca25528bba,Namespace:kube-system,Attempt:0,}" May 10 02:16:37.383407 env[1300]: time="2025-05-10T02:16:37.383294211Z" level=error msg="Failed to destroy network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.383926 env[1300]: time="2025-05-10T02:16:37.383876634Z" level=error msg="encountered an error cleaning up failed sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.384020 env[1300]: time="2025-05-10T02:16:37.383946664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6df465c-t7qd5,Uid:3bda719f-2d0c-40dd-8013-db678548720f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.385855 kubelet[2271]: E0510 02:16:37.384366 2271 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.386039 kubelet[2271]: E0510 02:16:37.385922 2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c6df465c-t7qd5" May 10 02:16:37.386039 kubelet[2271]: E0510 02:16:37.386002 2271 kuberuntime_manager.go:1166] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c6df465c-t7qd5" May 10 02:16:37.386674 kubelet[2271]: E0510 02:16:37.386122 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59c6df465c-t7qd5_calico-apiserver(3bda719f-2d0c-40dd-8013-db678548720f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59c6df465c-t7qd5_calico-apiserver(3bda719f-2d0c-40dd-8013-db678548720f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c6df465c-t7qd5" podUID="3bda719f-2d0c-40dd-8013-db678548720f" May 10 02:16:37.420556 env[1300]: time="2025-05-10T02:16:37.420460554Z" level=error msg="Failed to destroy network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.421077 env[1300]: time="2025-05-10T02:16:37.421012989Z" level=error msg="encountered an error cleaning up failed sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.421668 env[1300]: time="2025-05-10T02:16:37.421098823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9c5fc9f8-m6gc5,Uid:26d91028-bb97-4013-8daa-28d750dcefc1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.421812 kubelet[2271]: E0510 02:16:37.421472 2271 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.421812 kubelet[2271]: E0510 02:16:37.421566 2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6d9c5fc9f8-m6gc5" May 10 02:16:37.421812 kubelet[2271]: E0510 02:16:37.421653 2271 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d9c5fc9f8-m6gc5" May 10 02:16:37.422005 kubelet[2271]: E0510 02:16:37.421751 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d9c5fc9f8-m6gc5_calico-system(26d91028-bb97-4013-8daa-28d750dcefc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d9c5fc9f8-m6gc5_calico-system(26d91028-bb97-4013-8daa-28d750dcefc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9c5fc9f8-m6gc5" podUID="26d91028-bb97-4013-8daa-28d750dcefc1" May 10 02:16:37.441818 env[1300]: time="2025-05-10T02:16:37.441743559Z" level=error msg="Failed to destroy network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.442698 env[1300]: time="2025-05-10T02:16:37.442577128Z" level=error msg="encountered an error cleaning up failed sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.442904 env[1300]: time="2025-05-10T02:16:37.442847605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tzgf6,Uid:033eccb2-2101-4733-a8e6-14ca25528bba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.443433 kubelet[2271]: E0510 02:16:37.443377 2271 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.443564 kubelet[2271]: E0510 02:16:37.443476 2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tzgf6" May 10 02:16:37.443564 kubelet[2271]: E0510 02:16:37.443532 2271 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tzgf6" May 10 02:16:37.443739 kubelet[2271]: E0510 02:16:37.443597 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tzgf6_kube-system(033eccb2-2101-4733-a8e6-14ca25528bba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tzgf6_kube-system(033eccb2-2101-4733-a8e6-14ca25528bba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tzgf6" podUID="033eccb2-2101-4733-a8e6-14ca25528bba" May 10 02:16:37.449244 env[1300]: time="2025-05-10T02:16:37.449172735Z" level=error msg="Failed to destroy network for sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.449930 env[1300]: time="2025-05-10T02:16:37.449884847Z" level=error msg="encountered an error cleaning up failed sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.450134 env[1300]: time="2025-05-10T02:16:37.450073567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d5855849-n77bg,Uid:d0632dba-48ae-4acf-8c28-096b5737e007,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.450850 kubelet[2271]: E0510 02:16:37.450528 2271 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.450850 kubelet[2271]: E0510 02:16:37.450637 2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67d5855849-n77bg" May 10 02:16:37.450850 kubelet[2271]: E0510 02:16:37.450695 2271 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67d5855849-n77bg" May 10 02:16:37.451079 kubelet[2271]: E0510 02:16:37.450780 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67d5855849-n77bg_calico-apiserver(d0632dba-48ae-4acf-8c28-096b5737e007)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67d5855849-n77bg_calico-apiserver(d0632dba-48ae-4acf-8c28-096b5737e007)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67d5855849-n77bg" podUID="d0632dba-48ae-4acf-8c28-096b5737e007" May 10 02:16:37.455377 env[1300]: time="2025-05-10T02:16:37.455287828Z" level=error msg="Failed to destroy network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.456337 env[1300]: time="2025-05-10T02:16:37.456291244Z" level=error msg="encountered an error cleaning up failed sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.456597 env[1300]: time="2025-05-10T02:16:37.456530501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6qgjr,Uid:ab904080-8c4d-457a-bf1e-8c5d5229dc67,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.456965 kubelet[2271]: E0510 02:16:37.456923 2271 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.457077 kubelet[2271]: E0510 02:16:37.456983 2271 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6qgjr" May 10 02:16:37.457077 kubelet[2271]: E0510 02:16:37.457009 2271 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6qgjr" May 10 02:16:37.457595 kubelet[2271]: E0510 02:16:37.457074 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6qgjr_kube-system(ab904080-8c4d-457a-bf1e-8c5d5229dc67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6qgjr_kube-system(ab904080-8c4d-457a-bf1e-8c5d5229dc67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6qgjr" podUID="ab904080-8c4d-457a-bf1e-8c5d5229dc67" May 10 02:16:37.465768 env[1300]: time="2025-05-10T02:16:37.465615128Z" level=error msg="Failed to destroy network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.466521 env[1300]: time="2025-05-10T02:16:37.466465728Z" level=error msg="encountered an error cleaning up failed sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.466746 env[1300]: time="2025-05-10T02:16:37.466699707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6df465c-9n96s,Uid:cd3e93bd-3e59-43f7-987b-d85581ad5591,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.467745 kubelet[2271]: E0510 02:16:37.467348 2271 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.467745 kubelet[2271]: 
E0510 02:16:37.467423 2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c6df465c-9n96s" May 10 02:16:37.467745 kubelet[2271]: E0510 02:16:37.467531 2271 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c6df465c-9n96s" May 10 02:16:37.468214 kubelet[2271]: E0510 02:16:37.467660 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59c6df465c-9n96s_calico-apiserver(cd3e93bd-3e59-43f7-987b-d85581ad5591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59c6df465c-9n96s_calico-apiserver(cd3e93bd-3e59-43f7-987b-d85581ad5591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c6df465c-9n96s" podUID="cd3e93bd-3e59-43f7-987b-d85581ad5591" May 10 02:16:37.743401 kubelet[2271]: I0510 02:16:37.743197 2271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:16:37.751259 kubelet[2271]: I0510 02:16:37.751185 2271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:16:37.759590 env[1300]: time="2025-05-10T02:16:37.759506721Z" level=info msg="StopPodSandbox for \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\"" May 10 02:16:37.762312 env[1300]: time="2025-05-10T02:16:37.759597950Z" level=info msg="StopPodSandbox for \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\"" May 10 02:16:37.769993 kubelet[2271]: I0510 02:16:37.769952 2271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:16:37.772785 env[1300]: time="2025-05-10T02:16:37.771168321Z" level=info msg="StopPodSandbox for \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\"" May 10 02:16:37.772937 kubelet[2271]: I0510 02:16:37.772751 2271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:16:37.774373 env[1300]: time="2025-05-10T02:16:37.774332018Z" level=info msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\"" May 10 02:16:37.778256 kubelet[2271]: I0510 02:16:37.778211 2271 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:16:37.778850 env[1300]: time="2025-05-10T02:16:37.778811052Z" level=info msg="StopPodSandbox for \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\"" May 10 02:16:37.781429 kubelet[2271]: I0510 02:16:37.780711 2271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:16:37.783378 env[1300]: time="2025-05-10T02:16:37.783300167Z" level=info msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\"" May 10 02:16:37.881063 env[1300]: time="2025-05-10T02:16:37.880984167Z" level=error msg="StopPodSandbox for \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\" failed" error="failed to destroy network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.881609 kubelet[2271]: E0510 02:16:37.881549 2271 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:16:37.881770 kubelet[2271]: E0510 02:16:37.881657 2271 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e"} May 10 02:16:37.881833 kubelet[2271]: E0510 02:16:37.881785 2271 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"033eccb2-2101-4733-a8e6-14ca25528bba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 02:16:37.881833 kubelet[2271]: E0510 02:16:37.881819 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"033eccb2-2101-4733-a8e6-14ca25528bba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tzgf6" podUID="033eccb2-2101-4733-a8e6-14ca25528bba" May 10 02:16:37.895118 env[1300]: time="2025-05-10T02:16:37.895044173Z" level=error msg="StopPodSandbox for \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\" failed" error="failed to destroy network for sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 10 02:16:37.895541 kubelet[2271]: E0510 02:16:37.895485 2271 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:16:37.895711 kubelet[2271]: E0510 02:16:37.895551 2271 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9"} May 10 02:16:37.895711 kubelet[2271]: E0510 02:16:37.895590 2271 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0632dba-48ae-4acf-8c28-096b5737e007\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 02:16:37.895711 kubelet[2271]: E0510 02:16:37.895617 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0632dba-48ae-4acf-8c28-096b5737e007\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67d5855849-n77bg" podUID="d0632dba-48ae-4acf-8c28-096b5737e007" May 10 02:16:37.938356 env[1300]: time="2025-05-10T02:16:37.938278866Z" level=error msg="StopPodSandbox for \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\" failed" error="failed to destroy network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.938947 kubelet[2271]: E0510 02:16:37.938895 2271 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:16:37.939094 kubelet[2271]: E0510 02:16:37.938990 2271 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33"} May 10 02:16:37.939094 kubelet[2271]: E0510 02:16:37.939042 2271 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab904080-8c4d-457a-bf1e-8c5d5229dc67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 02:16:37.939634 kubelet[2271]: E0510 02:16:37.939080 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab904080-8c4d-457a-bf1e-8c5d5229dc67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6qgjr" podUID="ab904080-8c4d-457a-bf1e-8c5d5229dc67" May 10 02:16:37.943492 env[1300]: time="2025-05-10T02:16:37.943434836Z" level=error msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" failed" error="failed to destroy network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.944283 kubelet[2271]: E0510 02:16:37.944217 2271 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:16:37.944387 kubelet[2271]: E0510 02:16:37.944293 2271 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941"} May 10 02:16:37.944457 kubelet[2271]: E0510 02:16:37.944402 2271 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd3e93bd-3e59-43f7-987b-d85581ad5591\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 02:16:37.944457 kubelet[2271]: E0510 02:16:37.944438 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd3e93bd-3e59-43f7-987b-d85581ad5591\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c6df465c-9n96s" podUID="cd3e93bd-3e59-43f7-987b-d85581ad5591" May 10 02:16:37.950754 env[1300]: time="2025-05-10T02:16:37.950692652Z" level=error msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" 
failed" error="failed to destroy network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.950943 kubelet[2271]: E0510 02:16:37.950901 2271 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:16:37.951049 kubelet[2271]: E0510 02:16:37.950950 2271 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d"} May 10 02:16:37.951049 kubelet[2271]: E0510 02:16:37.950984 2271 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3bda719f-2d0c-40dd-8013-db678548720f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 02:16:37.951049 kubelet[2271]: E0510 02:16:37.951012 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3bda719f-2d0c-40dd-8013-db678548720f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c6df465c-t7qd5" podUID="3bda719f-2d0c-40dd-8013-db678548720f" May 10 02:16:37.954360 env[1300]: time="2025-05-10T02:16:37.954310826Z" level=error msg="StopPodSandbox for \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\" failed" error="failed to destroy network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:37.954535 kubelet[2271]: E0510 02:16:37.954483 2271 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:16:37.954535 kubelet[2271]: E0510 02:16:37.954532 2271 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05"} May 10 
02:16:37.954769 kubelet[2271]: E0510 02:16:37.954568 2271 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26d91028-bb97-4013-8daa-28d750dcefc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 02:16:37.954769 kubelet[2271]: E0510 02:16:37.954598 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26d91028-bb97-4013-8daa-28d750dcefc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9c5fc9f8-m6gc5" podUID="26d91028-bb97-4013-8daa-28d750dcefc1" May 10 02:16:38.528271 env[1300]: time="2025-05-10T02:16:38.528217824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msjzq,Uid:b62d1657-f00b-4957-b89b-113ee88c8696,Namespace:calico-system,Attempt:0,}" May 10 02:16:38.686620 env[1300]: time="2025-05-10T02:16:38.686528492Z" level=error msg="Failed to destroy network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:38.690031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f-shm.mount: Deactivated successfully. 
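[Editor's note] Every RunPodSandbox/StopPodSandbox failure in this stretch carries the same root cause stated in the error text: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted. Until that happens, all sandbox add and delete calls fail identically. The snippet below merely restates that precondition as runnable code; it is not Calico's actual (Go) implementation.

```python
#!/usr/bin/env python3
# Sketch of the readiness check implied by the repeated error above:
# the CNI plugin cannot proceed until /var/lib/calico/nodename exists.
from pathlib import Path

NODENAME_FILE = Path("/var/lib/calico/nodename")


def calico_node_ready() -> str:
    """Return the node name calico/node recorded, or raise if it is absent."""
    if not NODENAME_FILE.exists():
        raise FileNotFoundError(
            f"{NODENAME_FILE}: no such file or directory: "
            "check that the calico/node container is running "
            "and has mounted /var/lib/calico/"
        )
    return NODENAME_FILE.read_text().strip()


if __name__ == "__main__":
    print("calico node name:", calico_node_ready())
```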
May 10 02:16:38.695195 env[1300]: time="2025-05-10T02:16:38.695151134Z" level=error msg="encountered an error cleaning up failed sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:38.695428 env[1300]: time="2025-05-10T02:16:38.695379619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msjzq,Uid:b62d1657-f00b-4957-b89b-113ee88c8696,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:38.695909 kubelet[2271]: E0510 02:16:38.695851 2271 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:38.696579 kubelet[2271]: E0510 02:16:38.695947 2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-msjzq" May 10 02:16:38.696579 kubelet[2271]: E0510 02:16:38.696020 2271 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-msjzq" May 10 02:16:38.696579 kubelet[2271]: E0510 02:16:38.696109 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-msjzq_calico-system(b62d1657-f00b-4957-b89b-113ee88c8696)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-msjzq_calico-system(b62d1657-f00b-4957-b89b-113ee88c8696)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:38.786167 kubelet[2271]: I0510 02:16:38.785988 2271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:16:38.788137 env[1300]: time="2025-05-10T02:16:38.787863119Z" level=info msg="StopPodSandbox for 
\"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\"" May 10 02:16:38.851930 env[1300]: time="2025-05-10T02:16:38.851852960Z" level=error msg="StopPodSandbox for \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\" failed" error="failed to destroy network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:38.853267 kubelet[2271]: E0510 02:16:38.852623 2271 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:16:38.853267 kubelet[2271]: E0510 02:16:38.852764 2271 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f"} May 10 02:16:38.853267 kubelet[2271]: E0510 02:16:38.852823 2271 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b62d1657-f00b-4957-b89b-113ee88c8696\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 02:16:38.853267 kubelet[2271]: E0510 02:16:38.852861 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b62d1657-f00b-4957-b89b-113ee88c8696\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-msjzq" podUID="b62d1657-f00b-4957-b89b-113ee88c8696" May 10 02:16:48.522899 env[1300]: time="2025-05-10T02:16:48.522832065Z" level=info msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\"" May 10 02:16:48.639598 env[1300]: time="2025-05-10T02:16:48.639499912Z" level=error msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" failed" error="failed to destroy network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:48.643872 kubelet[2271]: E0510 02:16:48.643596 2271 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:16:48.643872 kubelet[2271]: E0510 02:16:48.643734 2271 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d"} May 10 02:16:48.643872 kubelet[2271]: E0510 02:16:48.643789 2271 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3bda719f-2d0c-40dd-8013-db678548720f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 02:16:48.643872 kubelet[2271]: E0510 02:16:48.643828 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3bda719f-2d0c-40dd-8013-db678548720f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c6df465c-t7qd5" podUID="3bda719f-2d0c-40dd-8013-db678548720f" May 10 02:16:49.081011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856985045.mount: Deactivated successfully. May 10 02:16:49.212584 env[1300]: time="2025-05-10T02:16:49.212508342Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:49.214517 env[1300]: time="2025-05-10T02:16:49.214482370Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:49.217386 env[1300]: time="2025-05-10T02:16:49.216459529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:49.218649 env[1300]: time="2025-05-10T02:16:49.218595640Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:49.219877 env[1300]: time="2025-05-10T02:16:49.219735769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 10 02:16:49.271666 env[1300]: time="2025-05-10T02:16:49.271027225Z" level=info msg="CreateContainer within sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 02:16:49.290566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058388461.mount: Deactivated successfully. 
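The ImageCreate/ImageUpdate events, the PullImage result, and the CreateContainer call above, followed by StartContainer just below, are the usual containerd lifecycle for the calico-node container: pull ghcr.io/flatcar/calico/node:v3.29.3, create a container inside the already-running pod sandbox, then start its task. Kubelet drives this through the CRI; the sketch below reproduces the same pull → create → start sequence with containerd's Go client purely for illustration. The container and snapshot names are made up, it targets a standalone container rather than a pod sandbox, and running it assumes access to /run/containerd/containerd.sock.

// A minimal sketch (not the CRI path kubelet uses) of the pull -> create ->
// start sequence visible in the log, using containerd's Go client against the
// k8s.io namespace.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: corresponds to the ImageCreate/ImageUpdate events above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: in the log this happens inside an existing pod sandbox;
	// here it is a standalone container with invented names, for illustration.
	container, err := client.NewContainer(ctx, "calico-node-example",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-example-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: mirrors the "StartContainer ... returns successfully" record below.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}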
May 10 02:16:49.297038 env[1300]: time="2025-05-10T02:16:49.296984025Z" level=info msg="CreateContainer within sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\"" May 10 02:16:49.300698 env[1300]: time="2025-05-10T02:16:49.298806914Z" level=info msg="StartContainer for \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\"" May 10 02:16:49.406179 env[1300]: time="2025-05-10T02:16:49.405593059Z" level=info msg="StartContainer for \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\" returns successfully" May 10 02:16:49.517637 env[1300]: time="2025-05-10T02:16:49.517560559Z" level=info msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\"" May 10 02:16:49.564434 env[1300]: time="2025-05-10T02:16:49.564329051Z" level=error msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" failed" error="failed to destroy network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 02:16:49.566051 kubelet[2271]: E0510 02:16:49.565944 2271 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:16:49.566226 kubelet[2271]: E0510 02:16:49.566074 2271 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941"} May 10 02:16:49.566341 kubelet[2271]: E0510 02:16:49.566305 2271 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd3e93bd-3e59-43f7-987b-d85581ad5591\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 02:16:49.566490 kubelet[2271]: E0510 02:16:49.566372 2271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd3e93bd-3e59-43f7-987b-d85581ad5591\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c6df465c-9n96s" podUID="cd3e93bd-3e59-43f7-987b-d85581ad5591" May 10 02:16:49.694666 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 10 02:16:49.695507 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. May 10 02:16:49.877736 kubelet[2271]: I0510 02:16:49.874269 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ctnlg" podStartSLOduration=1.846243552 podStartE2EDuration="29.868018712s" podCreationTimestamp="2025-05-10 02:16:20 +0000 UTC" firstStartedPulling="2025-05-10 02:16:21.201166121 +0000 UTC m=+20.984347642" lastFinishedPulling="2025-05-10 02:16:49.222941274 +0000 UTC m=+49.006122802" observedRunningTime="2025-05-10 02:16:49.866567279 +0000 UTC m=+49.649748812" watchObservedRunningTime="2025-05-10 02:16:49.868018712 +0000 UTC m=+49.651200244" May 10 02:16:50.518209 env[1300]: time="2025-05-10T02:16:50.518119381Z" level=info msg="StopPodSandbox for \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\"" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.641 [INFO][3553] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.641 [INFO][3553] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" iface="eth0" netns="/var/run/netns/cni-169fad01-3676-67dc-62a7-41ee97fa3737" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.643 [INFO][3553] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" iface="eth0" netns="/var/run/netns/cni-169fad01-3676-67dc-62a7-41ee97fa3737" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.646 [INFO][3553] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" iface="eth0" netns="/var/run/netns/cni-169fad01-3676-67dc-62a7-41ee97fa3737" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.646 [INFO][3553] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.646 [INFO][3553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.828 [INFO][3560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" HandleID="k8s-pod-network.ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.830 [INFO][3560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.831 [INFO][3560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.853 [WARNING][3560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" HandleID="k8s-pod-network.ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.854 [INFO][3560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" HandleID="k8s-pod-network.ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.856 [INFO][3560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:16:50.861169 env[1300]: 2025-05-10 02:16:50.859 [INFO][3553] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:16:50.872918 systemd[1]: run-containerd-runc-k8s.io-0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d-runc.ZewCIy.mount: Deactivated successfully. May 10 02:16:50.877138 env[1300]: time="2025-05-10T02:16:50.877075542Z" level=info msg="TearDown network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\" successfully" May 10 02:16:50.877305 env[1300]: time="2025-05-10T02:16:50.877271024Z" level=info msg="StopPodSandbox for \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\" returns successfully" May 10 02:16:50.878429 systemd[1]: run-netns-cni\x2d169fad01\x2d3676\x2d67dc\x2d62a7\x2d41ee97fa3737.mount: Deactivated successfully. May 10 02:16:50.880708 env[1300]: time="2025-05-10T02:16:50.880672178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tzgf6,Uid:033eccb2-2101-4733-a8e6-14ca25528bba,Namespace:kube-system,Attempt:1,}" May 10 02:16:51.205237 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 02:16:51.205601 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8d86b0ea5b4: link becomes ready May 10 02:16:51.217519 systemd-networkd[1074]: cali8d86b0ea5b4: Link UP May 10 02:16:51.217880 systemd-networkd[1074]: cali8d86b0ea5b4: Gained carrier May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.010 [INFO][3579] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.035 [INFO][3579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0 coredns-7db6d8ff4d- kube-system 033eccb2-2101-4733-a8e6-14ca25528bba 794 0 2025-05-10 02:16:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-it8yl.gb1.brightbox.com coredns-7db6d8ff4d-tzgf6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8d86b0ea5b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tzgf6" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.035 [INFO][3579] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-tzgf6" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.096 [INFO][3603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" HandleID="k8s-pod-network.dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.110 [INFO][3603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" HandleID="k8s-pod-network.dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b500), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-it8yl.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-tzgf6", "timestamp":"2025-05-10 02:16:51.095990629 +0000 UTC"}, Hostname:"srv-it8yl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.111 [INFO][3603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.111 [INFO][3603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.111 [INFO][3603] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-it8yl.gb1.brightbox.com' May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.114 [INFO][3603] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.127 [INFO][3603] ipam/ipam.go 372: Looking up existing affinities for host host="srv-it8yl.gb1.brightbox.com" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.134 [INFO][3603] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.137 [INFO][3603] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.140 [INFO][3603] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.140 [INFO][3603] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.142 [INFO][3603] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36 May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.151 [INFO][3603] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.161 
[INFO][3603] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.129/26] block=192.168.123.128/26 handle="k8s-pod-network.dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.161 [INFO][3603] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.129/26] handle="k8s-pod-network.dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.161 [INFO][3603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:16:51.252212 env[1300]: 2025-05-10 02:16:51.161 [INFO][3603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.129/26] IPv6=[] ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" HandleID="k8s-pod-network.dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:51.253705 env[1300]: 2025-05-10 02:16:51.166 [INFO][3579] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tzgf6" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"033eccb2-2101-4733-a8e6-14ca25528bba", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-tzgf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d86b0ea5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:51.253705 env[1300]: 2025-05-10 02:16:51.166 [INFO][3579] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.129/32] ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tzgf6" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:51.253705 env[1300]: 2025-05-10 02:16:51.166 [INFO][3579] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d86b0ea5b4 ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tzgf6" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:51.253705 env[1300]: 2025-05-10 02:16:51.214 [INFO][3579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tzgf6" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:51.253705 env[1300]: 2025-05-10 02:16:51.214 [INFO][3579] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tzgf6" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"033eccb2-2101-4733-a8e6-14ca25528bba", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36", Pod:"coredns-7db6d8ff4d-tzgf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d86b0ea5b4", MAC:"12:a5:bc:74:67:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:51.253705 env[1300]: 2025-05-10 02:16:51.247 [INFO][3579] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tzgf6" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:16:51.285005 env[1300]: time="2025-05-10T02:16:51.284900272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:16:51.285502 env[1300]: time="2025-05-10T02:16:51.285454888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:16:51.285817 env[1300]: time="2025-05-10T02:16:51.285757934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:16:51.288509 env[1300]: time="2025-05-10T02:16:51.288449927Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36 pid=3658 runtime=io.containerd.runc.v2 May 10 02:16:51.413711 systemd[1]: run-containerd-runc-k8s.io-dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36-runc.K2uHfL.mount: Deactivated successfully. May 10 02:16:51.502060 kernel: audit: type=1400 audit(1746843411.491:305): avc: denied { write } for pid=3709 comm="tee" name="fd" dev="proc" ino=30366 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.491000 audit[3709]: AVC avc: denied { write } for pid=3709 comm="tee" name="fd" dev="proc" ino=30366 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.522325 kernel: audit: type=1300 audit(1746843411.491:305): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4267da06 a2=241 a3=1b6 items=1 ppid=3633 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:51.491000 audit[3709]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4267da06 a2=241 a3=1b6 items=1 ppid=3633 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:51.526357 env[1300]: time="2025-05-10T02:16:51.526293753Z" level=info msg="StopPodSandbox for \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\"" May 10 02:16:51.491000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 10 02:16:51.530661 kernel: audit: type=1307 audit(1746843411.491:305): cwd="/etc/service/enabled/bird6/log" May 10 02:16:51.491000 audit: PATH item=0 name="/dev/fd/63" inode=30353 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:51.546756 kernel: audit: type=1302 audit(1746843411.491:305): item=0 name="/dev/fd/63" inode=30353 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:51.491000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:16:51.558720 kernel: audit: type=1327 audit(1746843411.491:305): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:16:51.562313 env[1300]: time="2025-05-10T02:16:51.562267068Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-tzgf6,Uid:033eccb2-2101-4733-a8e6-14ca25528bba,Namespace:kube-system,Attempt:1,} returns sandbox id \"dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36\"" May 10 02:16:51.528000 audit[3707]: AVC avc: denied { write } for pid=3707 comm="tee" name="fd" dev="proc" ino=30370 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.568693 kernel: audit: type=1400 audit(1746843411.528:306): avc: denied { write } for pid=3707 comm="tee" name="fd" dev="proc" ino=30370 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.581810 kernel: audit: type=1400 audit(1746843411.529:307): avc: denied { write } for pid=3688 comm="tee" name="fd" dev="proc" ino=30371 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.529000 audit[3688]: AVC avc: denied { write } for pid=3688 comm="tee" name="fd" dev="proc" ino=30371 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.590474 env[1300]: time="2025-05-10T02:16:51.590433152Z" level=info msg="CreateContainer within sandbox \"dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 02:16:51.601605 kernel: audit: type=1300 audit(1746843411.528:306): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd415aaa08 a2=241 a3=1b6 items=1 ppid=3628 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:51.528000 audit[3707]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd415aaa08 a2=241 a3=1b6 items=1 ppid=3628 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:51.528000 audit: CWD cwd="/etc/service/enabled/cni/log" May 10 02:16:51.608751 kernel: audit: type=1307 audit(1746843411.528:306): cwd="/etc/service/enabled/cni/log" May 10 02:16:51.528000 audit: PATH item=0 name="/dev/fd/63" inode=30352 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:51.614735 kernel: audit: type=1302 audit(1746843411.528:306): item=0 name="/dev/fd/63" inode=30352 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:51.528000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:16:51.529000 audit[3688]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe37b33a06 a2=241 a3=1b6 items=1 ppid=3619 pid=3688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:51.529000 audit: CWD cwd="/etc/service/enabled/felix/log" May 10 02:16:51.529000 audit: PATH item=0 name="/dev/fd/63" inode=29341 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:51.529000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:16:51.535000 audit[3693]: AVC avc: denied { write } for pid=3693 comm="tee" name="fd" dev="proc" ino=30374 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.535000 audit[3693]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff6ea31a06 a2=241 a3=1b6 items=1 ppid=3627 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:51.535000 audit: CWD cwd="/etc/service/enabled/confd/log" May 10 02:16:51.535000 audit: PATH item=0 name="/dev/fd/63" inode=29373 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:51.535000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:16:51.546000 audit[3718]: AVC avc: denied { write } for pid=3718 comm="tee" name="fd" dev="proc" ino=30379 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.546000 audit[3718]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff07998a07 a2=241 a3=1b6 items=1 ppid=3621 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:51.546000 audit: CWD cwd="/etc/service/enabled/bird/log" May 10 02:16:51.546000 audit: PATH item=0 name="/dev/fd/63" inode=30356 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:51.546000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:16:51.549000 audit[3723]: AVC avc: denied { write } for pid=3723 comm="tee" name="fd" dev="proc" ino=30385 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.549000 audit[3723]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa46169f6 a2=241 a3=1b6 items=1 ppid=3632 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:51.549000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 10 02:16:51.549000 audit: PATH item=0 name="/dev/fd/63" inode=29415 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:51.549000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:16:51.565000 audit[3741]: AVC avc: denied { write } for pid=3741 comm="tee" name="fd" dev="proc" ino=30392 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:16:51.565000 audit[3741]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd3b799f7 a2=241 a3=1b6 items=1 ppid=3625 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:51.565000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 10 02:16:51.565000 audit: PATH item=0 name="/dev/fd/63" inode=30387 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:51.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:16:51.684856 env[1300]: time="2025-05-10T02:16:51.684770166Z" level=info msg="CreateContainer within sandbox \"dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"602da787563027769231b935d46d51cd81a902a6d51a23950aa087e4db10c540\"" May 10 02:16:51.686406 env[1300]: time="2025-05-10T02:16:51.686359066Z" level=info msg="StartContainer for \"602da787563027769231b935d46d51cd81a902a6d51a23950aa087e4db10c540\"" May 10 02:16:51.935923 env[1300]: time="2025-05-10T02:16:51.935847779Z" level=info msg="StartContainer for \"602da787563027769231b935d46d51cd81a902a6d51a23950aa087e4db10c540\" returns successfully" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:51.975 [INFO][3759] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:51.978 [INFO][3759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" iface="eth0" netns="/var/run/netns/cni-477b0027-6201-6aec-2abb-d1c4b3d7e74e" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:51.978 [INFO][3759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" iface="eth0" netns="/var/run/netns/cni-477b0027-6201-6aec-2abb-d1c4b3d7e74e" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:51.978 [INFO][3759] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" iface="eth0" netns="/var/run/netns/cni-477b0027-6201-6aec-2abb-d1c4b3d7e74e" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:51.978 [INFO][3759] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:51.978 [INFO][3759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:52.167 [INFO][3806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" HandleID="k8s-pod-network.8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:52.168 [INFO][3806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:52.168 [INFO][3806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:52.186 [WARNING][3806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" HandleID="k8s-pod-network.8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:52.186 [INFO][3806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" HandleID="k8s-pod-network.8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:52.190 [INFO][3806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:16:52.199900 env[1300]: 2025-05-10 02:16:52.192 [INFO][3759] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:16:52.205513 systemd[1]: run-netns-cni\x2d477b0027\x2d6201\x2d6aec\x2d2abb\x2dd1c4b3d7e74e.mount: Deactivated successfully. 
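The ipam/ipam_plugin.go and ipam/ipam.go messages in this section narrate Calico's block-based IPAM on both paths: release by handle ID for the torn-down sandboxes (including the "Asked to release address but it doesn't exist. Ignoring" case) and auto-assignment for the coredns pod (acquire the host-wide IPAM lock, look up the host's affinities, load the affine 192.168.123.128/26 block, claim one address, write the block, release the lock). The records that follow repeat the same assignment for the calico-kube-controllers pod. The Go sketch below is a deliberately simplified model of those steps for a single host with one affine /26 block; it is not Calico's implementation, and all type and handle names are invented for illustration.

// Toy block-based IPAM mirroring the steps the log narrates.
package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr      netip.Prefix          // e.g. 192.168.123.128/26
	allocated map[netip.Addr]string // address -> handle ID
}

type ipam struct {
	mu     sync.Mutex        // stands in for the "host-wide IPAM lock" in the log
	blocks map[string]*block // host -> affine block
}

func (p *ipam) autoAssign(host, handleID string) (netip.Addr, error) {
	p.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer p.mu.Unlock()

	b, ok := p.blocks[host] // "Looking up existing affinities for host"
	if !ok {
		return netip.Addr{}, errors.New("no affine block for host")
	}
	// "Attempting to assign 1 addresses from block"
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, used := b.allocated[a]; !used {
			b.allocated[a] = handleID // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func (p *ipam) release(host, handleID string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if b, ok := p.blocks[host]; ok {
		for a, h := range b.allocated {
			if h == handleID {
				delete(b.allocated, a)
				return
			}
		}
	}
	// Mirrors the WARNING above: asked to release an address that doesn't exist.
	fmt.Println("release: handle not found, ignoring:", handleID)
}

func main() {
	cidr := netip.MustParsePrefix("192.168.123.128/26")
	p := &ipam{blocks: map[string]*block{
		"srv-it8yl.gb1.brightbox.com": {cidr: cidr, allocated: map[netip.Addr]string{}},
	}}
	// Placeholder handle; the real handles are k8s-pod-network.<containerID>.
	addr, err := p.autoAssign("srv-it8yl.gb1.brightbox.com", "k8s-pod-network.example")
	// In this toy model the first claim is the block's first address; the real
	// allocator above handed out 192.168.123.129 first.
	fmt.Println(addr, err)
	p.release("srv-it8yl.gb1.brightbox.com", "k8s-pod-network.missing")
}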
May 10 02:16:52.211429 env[1300]: time="2025-05-10T02:16:52.211119902Z" level=info msg="TearDown network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\" successfully" May 10 02:16:52.211429 env[1300]: time="2025-05-10T02:16:52.211178745Z" level=info msg="StopPodSandbox for \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\" returns successfully" May 10 02:16:52.214103 env[1300]: time="2025-05-10T02:16:52.214061570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9c5fc9f8-m6gc5,Uid:26d91028-bb97-4013-8daa-28d750dcefc1,Namespace:calico-system,Attempt:1,}" May 10 02:16:52.447096 systemd-networkd[1074]: cali8d86b0ea5b4: Gained IPv6LL May 10 02:16:52.533961 env[1300]: time="2025-05-10T02:16:52.533875113Z" level=info msg="StopPodSandbox for \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\"" May 10 02:16:52.545736 env[1300]: time="2025-05-10T02:16:52.534499353Z" level=info msg="StopPodSandbox for \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\"" May 10 02:16:52.546183 env[1300]: time="2025-05-10T02:16:52.536418691Z" level=info msg="StopPodSandbox for \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\"" May 10 02:16:52.811393 systemd-networkd[1074]: cali37a65ea8dcd: Link UP May 10 02:16:52.814717 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 02:16:52.814922 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali37a65ea8dcd: link becomes ready May 10 02:16:52.815134 systemd-networkd[1074]: cali37a65ea8dcd: Gained carrier May 10 02:16:52.866000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.866000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.866000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.866000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.866000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.866000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.866000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.866000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.866000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.866000 audit: BPF 
prog-id=10 op=LOAD May 10 02:16:52.866000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe6b37d320 a2=98 a3=3 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:52.866000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:52.871000 audit: BPF prog-id=10 op=UNLOAD May 10 02:16:52.876000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.876000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.876000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.876000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.876000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.876000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.876000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.876000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.876000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.876000 audit: BPF prog-id=11 op=LOAD May 10 02:16:52.876000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe6b37d100 a2=74 a3=540051 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:52.876000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:52.880000 audit: BPF prog-id=11 op=UNLOAD May 10 02:16:52.880000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.880000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.880000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.880000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.880000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.880000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.880000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.880000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.880000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:52.880000 audit: BPF prog-id=12 op=LOAD May 10 02:16:52.880000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe6b37d130 a2=94 a3=2 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:52.880000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:52.881000 audit: BPF prog-id=12 op=UNLOAD May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.406 [INFO][3833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0 calico-kube-controllers-6d9c5fc9f8- calico-system 26d91028-bb97-4013-8daa-28d750dcefc1 807 0 2025-05-10 02:16:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d9c5fc9f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-it8yl.gb1.brightbox.com calico-kube-controllers-6d9c5fc9f8-m6gc5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali37a65ea8dcd [] []}} ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Namespace="calico-system" Pod="calico-kube-controllers-6d9c5fc9f8-m6gc5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.407 [INFO][3833] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Namespace="calico-system" Pod="calico-kube-controllers-6d9c5fc9f8-m6gc5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.584 [INFO][3857] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.638 [INFO][3857] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000505d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-it8yl.gb1.brightbox.com", "pod":"calico-kube-controllers-6d9c5fc9f8-m6gc5", "timestamp":"2025-05-10 02:16:52.58396421 +0000 UTC"}, Hostname:"srv-it8yl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.639 [INFO][3857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.639 [INFO][3857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.639 [INFO][3857] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-it8yl.gb1.brightbox.com' May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.643 [INFO][3857] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.678 [INFO][3857] ipam/ipam.go 372: Looking up existing affinities for host host="srv-it8yl.gb1.brightbox.com" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.703 [INFO][3857] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.709 [INFO][3857] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.727 [INFO][3857] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.727 [INFO][3857] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.729 [INFO][3857] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0 May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.739 [INFO][3857] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.763 [INFO][3857] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.130/26] block=192.168.123.128/26 handle="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" 
host="srv-it8yl.gb1.brightbox.com" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.763 [INFO][3857] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.130/26] handle="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.763 [INFO][3857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:16:52.887955 env[1300]: 2025-05-10 02:16:52.764 [INFO][3857] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.130/26] IPv6=[] ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:16:52.889170 env[1300]: 2025-05-10 02:16:52.785 [INFO][3833] cni-plugin/k8s.go 386: Populated endpoint ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Namespace="calico-system" Pod="calico-kube-controllers-6d9c5fc9f8-m6gc5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0", GenerateName:"calico-kube-controllers-6d9c5fc9f8-", Namespace:"calico-system", SelfLink:"", UID:"26d91028-bb97-4013-8daa-28d750dcefc1", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9c5fc9f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6d9c5fc9f8-m6gc5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37a65ea8dcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:52.889170 env[1300]: 2025-05-10 02:16:52.785 [INFO][3833] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.130/32] ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Namespace="calico-system" Pod="calico-kube-controllers-6d9c5fc9f8-m6gc5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:16:52.889170 env[1300]: 2025-05-10 02:16:52.785 [INFO][3833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37a65ea8dcd ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Namespace="calico-system" Pod="calico-kube-controllers-6d9c5fc9f8-m6gc5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 
02:16:52.889170 env[1300]: 2025-05-10 02:16:52.814 [INFO][3833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Namespace="calico-system" Pod="calico-kube-controllers-6d9c5fc9f8-m6gc5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:16:52.889170 env[1300]: 2025-05-10 02:16:52.822 [INFO][3833] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Namespace="calico-system" Pod="calico-kube-controllers-6d9c5fc9f8-m6gc5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0", GenerateName:"calico-kube-controllers-6d9c5fc9f8-", Namespace:"calico-system", SelfLink:"", UID:"26d91028-bb97-4013-8daa-28d750dcefc1", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9c5fc9f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0", Pod:"calico-kube-controllers-6d9c5fc9f8-m6gc5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37a65ea8dcd", MAC:"e6:79:26:ee:2f:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:52.889170 env[1300]: 2025-05-10 02:16:52.864 [INFO][3833] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Namespace="calico-system" Pod="calico-kube-controllers-6d9c5fc9f8-m6gc5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:16:52.935775 kubelet[2271]: I0510 02:16:52.934524 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tzgf6" podStartSLOduration=39.934472757 podStartE2EDuration="39.934472757s" podCreationTimestamp="2025-05-10 02:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:16:52.902058128 +0000 UTC m=+52.685239654" watchObservedRunningTime="2025-05-10 02:16:52.934472757 +0000 UTC m=+52.717654278" May 10 02:16:53.057000 audit[3964]: NETFILTER_CFG table=filter:97 family=2 entries=13 op=nft_register_rule pid=3964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:53.057000 audit[3964]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc56fdc0c0 a2=0 a3=7ffc56fdc0ac items=0 ppid=2455 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.057000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:53.071000 audit[3964]: NETFILTER_CFG table=nat:98 family=2 entries=35 op=nft_register_chain pid=3964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:53.071000 audit[3964]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc56fdc0c0 a2=0 a3=7ffc56fdc0ac items=0 ppid=2455 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.071000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:53.083566 env[1300]: time="2025-05-10T02:16:53.076879515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:16:53.084375 env[1300]: time="2025-05-10T02:16:53.084302798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:16:53.084746 env[1300]: time="2025-05-10T02:16:53.084688360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:16:53.095936 env[1300]: time="2025-05-10T02:16:53.095861859Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0 pid=3958 runtime=io.containerd.runc.v2 May 10 02:16:53.169000 audit[3993]: NETFILTER_CFG table=filter:99 family=2 entries=10 op=nft_register_rule pid=3993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:53.169000 audit[3993]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc564d62d0 a2=0 a3=7ffc564d62bc items=0 ppid=2455 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:53.174000 audit[3993]: NETFILTER_CFG table=nat:100 family=2 entries=20 op=nft_register_rule pid=3993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:53.174000 audit[3993]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc564d62d0 a2=0 a3=7ffc564d62bc items=0 ppid=2455 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:53.205398 systemd[1]: 
run-containerd-runc-k8s.io-61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0-runc.vy4Zks.mount: Deactivated successfully. May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.090 [INFO][3915] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.090 [INFO][3915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" iface="eth0" netns="/var/run/netns/cni-d536928d-d4a5-4bea-6c7e-830e0406071e" May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.091 [INFO][3915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" iface="eth0" netns="/var/run/netns/cni-d536928d-d4a5-4bea-6c7e-830e0406071e" May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.091 [INFO][3915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" iface="eth0" netns="/var/run/netns/cni-d536928d-d4a5-4bea-6c7e-830e0406071e" May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.091 [INFO][3915] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.091 [INFO][3915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.404 [INFO][3975] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" HandleID="k8s-pod-network.6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.405 [INFO][3975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.405 [INFO][3975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.424 [WARNING][3975] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" HandleID="k8s-pod-network.6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.424 [INFO][3975] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" HandleID="k8s-pod-network.6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.426 [INFO][3975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:16:53.435564 env[1300]: 2025-05-10 02:16:53.433 [INFO][3915] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:16:53.440000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.440794 systemd[1]: run-netns-cni\x2dd536928d\x2dd4a5\x2d4bea\x2d6c7e\x2d830e0406071e.mount: Deactivated successfully. May 10 02:16:53.440000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.440000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.440000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.440000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.440000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.440000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.440000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.440000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.440000 audit: BPF prog-id=13 op=LOAD May 10 02:16:53.440000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe6b37cff0 a2=40 a3=1 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.440000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.442000 audit: BPF prog-id=13 op=UNLOAD May 10 02:16:53.442000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.442000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe6b37d0c0 a2=50 a3=7ffe6b37d1a0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.465958 env[1300]: time="2025-05-10T02:16:53.465193645Z" level=info msg="TearDown network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\" successfully" May 10 02:16:53.465958 
env[1300]: time="2025-05-10T02:16:53.465263879Z" level=info msg="StopPodSandbox for \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\" returns successfully" May 10 02:16:53.467443 env[1300]: time="2025-05-10T02:16:53.467062164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6qgjr,Uid:ab904080-8c4d-457a-bf1e-8c5d5229dc67,Namespace:kube-system,Attempt:1,}" May 10 02:16:53.469869 env[1300]: time="2025-05-10T02:16:53.469831057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9c5fc9f8-m6gc5,Uid:26d91028-bb97-4013-8daa-28d750dcefc1,Namespace:calico-system,Attempt:1,} returns sandbox id \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\"" May 10 02:16:53.480966 env[1300]: time="2025-05-10T02:16:53.480927785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 10 02:16:53.481000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.481000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6b37d000 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.481000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.482000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.482000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6b37d030 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.482000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.483000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.483000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6b37cf40 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.483000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.483000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.483000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6b37d050 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.483000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.486000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.486000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6b37d030 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.486000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.486000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.486000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6b37d020 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.486000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.486000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.486000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6b37d050 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.486000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.486000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.486000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6b37d030 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.486000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.486000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.486000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6b37d050 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.486000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.487000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6b37d020 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.487000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.487000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6b37d090 a2=28 a3=0 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.487000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.487000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe6b37ce40 a2=50 a3=1 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.487000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.487000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.487000 audit: BPF prog-id=14 op=LOAD May 10 02:16:53.487000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=6 a0=5 a1=7ffe6b37ce40 a2=94 a3=5 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.487000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.487000 audit: BPF prog-id=14 op=UNLOAD May 10 02:16:53.488000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe6b37cef0 a2=50 a3=1 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.488000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.488000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe6b37d010 a2=4 a3=38 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.488000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.488000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.488000 audit[3933]: AVC avc: denied { confidentiality } for pid=3933 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:16:53.488000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe6b37d060 a2=94 a3=6 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.488000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.489000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { confidentiality } for pid=3933 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:16:53.489000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe6b37c810 a2=94 a3=83 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.489000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { perfmon } for pid=3933 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { bpf } for pid=3933 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.489000 audit[3933]: AVC avc: denied { confidentiality } for pid=3933 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:16:53.489000 audit[3933]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe6b37c810 a2=94 a3=83 items=0 ppid=3620 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.128 [INFO][3916] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.128 [INFO][3916] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" iface="eth0" netns="/var/run/netns/cni-21d1935e-8fe0-d70c-7393-cadbb32f6229" May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.128 [INFO][3916] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" iface="eth0" netns="/var/run/netns/cni-21d1935e-8fe0-d70c-7393-cadbb32f6229" May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.129 [INFO][3916] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" iface="eth0" netns="/var/run/netns/cni-21d1935e-8fe0-d70c-7393-cadbb32f6229" May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.129 [INFO][3916] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.130 [INFO][3916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.458 [INFO][3986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" HandleID="k8s-pod-network.8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.459 [INFO][3986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.507 [INFO][3986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.520 [WARNING][3986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" HandleID="k8s-pod-network.8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.520 [INFO][3986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" HandleID="k8s-pod-network.8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.524 [INFO][3986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:16:53.528807 env[1300]: 2025-05-10 02:16:53.526 [INFO][3916] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:16:53.531989 env[1300]: time="2025-05-10T02:16:53.531943857Z" level=info msg="TearDown network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\" successfully" May 10 02:16:53.532140 env[1300]: time="2025-05-10T02:16:53.532105535Z" level=info msg="StopPodSandbox for \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\" returns successfully" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.126 [INFO][3914] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.127 [INFO][3914] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" iface="eth0" netns="/var/run/netns/cni-5d3763b4-e2f9-33e0-44be-6583c00909f8" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.127 [INFO][3914] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" iface="eth0" netns="/var/run/netns/cni-5d3763b4-e2f9-33e0-44be-6583c00909f8" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.129 [INFO][3914] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" iface="eth0" netns="/var/run/netns/cni-5d3763b4-e2f9-33e0-44be-6583c00909f8" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.129 [INFO][3914] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.129 [INFO][3914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.422 [INFO][3987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" HandleID="k8s-pod-network.c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.423 [INFO][3987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.427 [INFO][3987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.486 [WARNING][3987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" HandleID="k8s-pod-network.c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.486 [INFO][3987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" HandleID="k8s-pod-network.c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.507 [INFO][3987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 02:16:53.532385 env[1300]: 2025-05-10 02:16:53.529 [INFO][3914] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:16:53.535165 env[1300]: time="2025-05-10T02:16:53.535091569Z" level=info msg="TearDown network for sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\" successfully" May 10 02:16:53.535468 env[1300]: time="2025-05-10T02:16:53.535419879Z" level=info msg="StopPodSandbox for \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\" returns successfully" May 10 02:16:53.536108 env[1300]: time="2025-05-10T02:16:53.535738848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msjzq,Uid:b62d1657-f00b-4957-b89b-113ee88c8696,Namespace:calico-system,Attempt:1,}" May 10 02:16:53.536518 env[1300]: time="2025-05-10T02:16:53.536480847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d5855849-n77bg,Uid:d0632dba-48ae-4acf-8c28-096b5737e007,Namespace:calico-apiserver,Attempt:1,}" May 10 02:16:53.630000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.630000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.630000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.630000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.630000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.630000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.630000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.630000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.630000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.630000 audit: BPF prog-id=15 op=LOAD May 10 02:16:53.630000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe6ed3b640 a2=98 a3=1999999999999999 items=0 ppid=3620 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.630000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 10 02:16:53.631000 audit: BPF prog-id=15 op=UNLOAD May 10 02:16:53.631000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit: BPF prog-id=16 op=LOAD May 10 02:16:53.631000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe6ed3b520 a2=74 a3=ffff items=0 ppid=3620 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.631000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 10 02:16:53.631000 audit: BPF prog-id=16 op=UNLOAD May 10 02:16:53.631000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.631000 audit: BPF prog-id=17 op=LOAD May 10 02:16:53.631000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe6ed3b560 a2=40 a3=7ffe6ed3b740 items=0 ppid=3620 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.631000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 10 02:16:53.631000 audit: BPF prog-id=17 op=UNLOAD May 10 02:16:53.821092 systemd-networkd[1074]: vxlan.calico: Link UP May 10 02:16:53.821106 systemd-networkd[1074]: vxlan.calico: Gained carrier May 10 02:16:53.940074 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 02:16:53.940294 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali67b4013eb85: link becomes ready May 10 02:16:53.944262 systemd-networkd[1074]: cali67b4013eb85: Link UP May 10 02:16:53.944894 systemd-networkd[1074]: cali67b4013eb85: Gained carrier May 10 02:16:53.948000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.948000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.948000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.948000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.948000 audit[4113]: AVC avc: denied { perfmon } 
for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.948000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.948000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.948000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.948000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.948000 audit: BPF prog-id=18 op=LOAD May 10 02:16:53.948000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0c570170 a2=98 a3=100 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.948000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.948000 audit: BPF prog-id=18 op=UNLOAD May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit: BPF prog-id=19 op=LOAD May 10 02:16:53.949000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0c56ff80 a2=74 a3=540051 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.949000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.949000 audit: BPF prog-id=19 op=UNLOAD May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit: BPF prog-id=20 op=LOAD May 10 02:16:53.949000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0c56ffb0 a2=94 a3=2 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.949000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.949000 audit: BPF prog-id=20 op=UNLOAD May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd0c56fe80 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.949000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.949000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.949000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd0c56feb0 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.949000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd0c56fdc0 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd0c56fed0 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd0c56feb0 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd0c56fea0 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd0c56fed0 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd0c56feb0 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd0c56fed0 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd0c56fea0 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd0c56ff10 a2=28 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 
02:16:53.950000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.950000 audit: BPF prog-id=21 op=LOAD May 10 02:16:53.950000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd0c56fd80 a2=40 a3=0 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.950000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.950000 audit: BPF prog-id=21 op=UNLOAD May 10 02:16:53.951000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.951000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd0c56fd70 a2=50 a3=2800 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.951000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd0c56fd70 a2=50 a3=2800 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.952000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit: BPF prog-id=22 op=LOAD May 10 02:16:53.952000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd0c56f590 a2=94 a3=2 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.952000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.952000 audit: BPF prog-id=22 op=UNLOAD May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { perfmon } for pid=4113 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: 
AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit[4113]: AVC avc: denied { bpf } for pid=4113 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:53.952000 audit: BPF prog-id=23 op=LOAD May 10 02:16:53.952000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd0c56f690 a2=94 a3=30 items=0 ppid=3620 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:53.952000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.597 [INFO][4024] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0 coredns-7db6d8ff4d- kube-system ab904080-8c4d-457a-bf1e-8c5d5229dc67 824 0 2025-05-10 02:16:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-it8yl.gb1.brightbox.com coredns-7db6d8ff4d-6qgjr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67b4013eb85 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6qgjr" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.597 [INFO][4024] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6qgjr" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.826 [INFO][4051] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" HandleID="k8s-pod-network.a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.858 [INFO][4051] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" HandleID="k8s-pod-network.a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5df0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-it8yl.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-6qgjr", "timestamp":"2025-05-10 02:16:53.826082241 +0000 UTC"}, Hostname:"srv-it8yl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.858 [INFO][4051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.858 [INFO][4051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.858 [INFO][4051] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-it8yl.gb1.brightbox.com' May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.864 [INFO][4051] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.870 [INFO][4051] ipam/ipam.go 372: Looking up existing affinities for host host="srv-it8yl.gb1.brightbox.com" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.880 [INFO][4051] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.885 [INFO][4051] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.889 [INFO][4051] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.889 [INFO][4051] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.891 [INFO][4051] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7 May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.899 [INFO][4051] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.916 [INFO][4051] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.131/26] block=192.168.123.128/26 handle="k8s-pod-network.a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.916 [INFO][4051] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.131/26] handle="k8s-pod-network.a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.916 [INFO][4051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 02:16:53.987115 env[1300]: 2025-05-10 02:16:53.916 [INFO][4051] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.131/26] IPv6=[] ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" HandleID="k8s-pod-network.a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:53.988511 env[1300]: 2025-05-10 02:16:53.924 [INFO][4024] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6qgjr" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ab904080-8c4d-457a-bf1e-8c5d5229dc67", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-6qgjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67b4013eb85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:53.988511 env[1300]: 2025-05-10 02:16:53.924 [INFO][4024] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.131/32] ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6qgjr" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:53.988511 env[1300]: 2025-05-10 02:16:53.924 [INFO][4024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67b4013eb85 ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6qgjr" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:53.988511 env[1300]: 2025-05-10 02:16:53.944 [INFO][4024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6qgjr" 
WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:53.988511 env[1300]: 2025-05-10 02:16:53.945 [INFO][4024] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6qgjr" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ab904080-8c4d-457a-bf1e-8c5d5229dc67", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7", Pod:"coredns-7db6d8ff4d-6qgjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67b4013eb85", MAC:"72:59:fc:f5:13:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:53.988511 env[1300]: 2025-05-10 02:16:53.973 [INFO][4024] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6qgjr" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:16:54.006000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.006000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.006000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.006000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.006000 audit[4119]: AVC avc: 
denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.006000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.006000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.006000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.006000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.006000 audit: BPF prog-id=24 op=LOAD May 10 02:16:54.006000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7a8ac780 a2=98 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.006000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.009000 audit: BPF prog-id=24 op=UNLOAD May 10 02:16:54.009000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.009000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.009000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.009000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.009000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.009000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.009000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.009000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.009000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.009000 audit: BPF prog-id=25 op=LOAD May 10 02:16:54.009000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd7a8ac560 a2=74 a3=540051 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.009000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.011000 audit: BPF prog-id=25 op=UNLOAD May 10 02:16:54.011000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.011000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.011000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.011000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.011000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.011000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.011000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.011000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.011000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.011000 audit: BPF prog-id=26 op=LOAD May 10 02:16:54.011000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd7a8ac590 a2=94 a3=2 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.011000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.011000 audit: BPF prog-id=26 op=UNLOAD May 10 02:16:54.071462 systemd-networkd[1074]: calic1860a1d3f5: Link UP May 10 02:16:54.075715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic1860a1d3f5: link becomes ready May 10 02:16:54.076318 systemd-networkd[1074]: 
calic1860a1d3f5: Gained carrier May 10 02:16:54.112598 systemd[1]: run-netns-cni\x2d21d1935e\x2d8fe0\x2dd70c\x2d7393\x2dcadbb32f6229.mount: Deactivated successfully. May 10 02:16:54.112851 systemd[1]: run-netns-cni\x2d5d3763b4\x2de2f9\x2d33e0\x2d44be\x2d6583c00909f8.mount: Deactivated successfully. May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:53.711 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0 calico-apiserver-67d5855849- calico-apiserver d0632dba-48ae-4acf-8c28-096b5737e007 826 0 2025-05-10 02:16:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67d5855849 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-it8yl.gb1.brightbox.com calico-apiserver-67d5855849-n77bg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic1860a1d3f5 [] []}} ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-n77bg" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:53.711 [INFO][4038] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-n77bg" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:53.926 [INFO][4083] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" HandleID="k8s-pod-network.bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:53.995 [INFO][4083] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" HandleID="k8s-pod-network.bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e4490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-it8yl.gb1.brightbox.com", "pod":"calico-apiserver-67d5855849-n77bg", "timestamp":"2025-05-10 02:16:53.926410823 +0000 UTC"}, Hostname:"srv-it8yl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:53.995 [INFO][4083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:53.995 [INFO][4083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
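The two run-netns-cni\x2d… mount units that systemd reports as deactivated above are network-namespace bind mounts under /run/netns, presumably created for pod sandboxes and unmounted once the namespaces are no longer needed. systemd escapes unit names, writing "/" as "-" and a literal "-" as "\x2d"; a small sketch of undoing that escaping for the first unit name (systemd-escape --unescape should give the same result from a shell):

    import re

    unit = r"run-netns-cni\x2d21d1935e\x2d8fe0\x2dd70c\x2d7393\x2dcadbb32f6229.mount"
    name = unit[: -len(".mount")]
    # "/" is stored as "-" in unit names, so restore it first, then expand the \xNN escapes.
    path = "/" + name.replace("-", "/")
    path = re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)
    print(path)  # /run/netns/cni-21d1935e-8fe0-d70c-7393-cadbb32f6229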
May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:53.995 [INFO][4083] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-it8yl.gb1.brightbox.com' May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.002 [INFO][4083] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.016 [INFO][4083] ipam/ipam.go 372: Looking up existing affinities for host host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.024 [INFO][4083] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.028 [INFO][4083] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.032 [INFO][4083] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.032 [INFO][4083] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.035 [INFO][4083] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418 May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.043 [INFO][4083] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.061 [INFO][4083] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.132/26] block=192.168.123.128/26 handle="k8s-pod-network.bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.061 [INFO][4083] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.132/26] handle="k8s-pod-network.bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.061 [INFO][4083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 02:16:54.122773 env[1300]: 2025-05-10 02:16:54.061 [INFO][4083] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.132/26] IPv6=[] ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" HandleID="k8s-pod-network.bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:54.124354 env[1300]: 2025-05-10 02:16:54.064 [INFO][4038] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-n77bg" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0", GenerateName:"calico-apiserver-67d5855849-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0632dba-48ae-4acf-8c28-096b5737e007", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d5855849", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-67d5855849-n77bg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1860a1d3f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:54.124354 env[1300]: 2025-05-10 02:16:54.064 [INFO][4038] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.132/32] ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-n77bg" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:54.124354 env[1300]: 2025-05-10 02:16:54.064 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1860a1d3f5 ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-n77bg" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:54.124354 env[1300]: 2025-05-10 02:16:54.075 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-n77bg" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:54.124354 env[1300]: 2025-05-10 02:16:54.075 [INFO][4038] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-n77bg" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0", GenerateName:"calico-apiserver-67d5855849-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0632dba-48ae-4acf-8c28-096b5737e007", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d5855849", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418", Pod:"calico-apiserver-67d5855849-n77bg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1860a1d3f5", MAC:"ee:4b:52:b5:9f:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:54.124354 env[1300]: 2025-05-10 02:16:54.093 [INFO][4038] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-n77bg" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:16:54.142066 env[1300]: time="2025-05-10T02:16:54.141963879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:16:54.142318 env[1300]: time="2025-05-10T02:16:54.142272646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:16:54.142579 env[1300]: time="2025-05-10T02:16:54.142533856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:16:54.149849 env[1300]: time="2025-05-10T02:16:54.149769035Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7 pid=4146 runtime=io.containerd.runc.v2 May 10 02:16:54.209933 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic1d5e5987e2: link becomes ready May 10 02:16:54.209054 systemd-networkd[1074]: calic1d5e5987e2: Link UP May 10 02:16:54.212817 systemd-networkd[1074]: calic1d5e5987e2: Gained carrier May 10 02:16:54.221410 env[1300]: time="2025-05-10T02:16:54.220973495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:16:54.221668 env[1300]: time="2025-05-10T02:16:54.221594611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:16:54.221935 env[1300]: time="2025-05-10T02:16:54.221811000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:16:54.223133 env[1300]: time="2025-05-10T02:16:54.223068966Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418 pid=4176 runtime=io.containerd.runc.v2 May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:53.720 [INFO][4035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0 csi-node-driver- calico-system b62d1657-f00b-4957-b89b-113ee88c8696 825 0 2025-05-10 02:16:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-it8yl.gb1.brightbox.com csi-node-driver-msjzq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic1d5e5987e2 [] []}} ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Namespace="calico-system" Pod="csi-node-driver-msjzq" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:53.723 [INFO][4035] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Namespace="calico-system" Pod="csi-node-driver-msjzq" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:53.988 [INFO][4085] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" HandleID="k8s-pod-network.b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.010 [INFO][4085] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" HandleID="k8s-pod-network.b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011bca0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-it8yl.gb1.brightbox.com", "pod":"csi-node-driver-msjzq", "timestamp":"2025-05-10 02:16:53.988437362 +0000 UTC"}, Hostname:"srv-it8yl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.010 [INFO][4085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.061 [INFO][4085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.061 [INFO][4085] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-it8yl.gb1.brightbox.com' May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.065 [INFO][4085] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.077 [INFO][4085] ipam/ipam.go 372: Looking up existing affinities for host host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.123 [INFO][4085] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.128 [INFO][4085] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.135 [INFO][4085] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.135 [INFO][4085] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.144 [INFO][4085] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4 May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.153 [INFO][4085] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.175 [INFO][4085] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.133/26] block=192.168.123.128/26 handle="k8s-pod-network.b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.175 [INFO][4085] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.133/26] handle="k8s-pod-network.b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" host="srv-it8yl.gb1.brightbox.com" May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.175 [INFO][4085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 02:16:54.250539 env[1300]: 2025-05-10 02:16:54.175 [INFO][4085] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.133/26] IPv6=[] ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" HandleID="k8s-pod-network.b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:54.252280 env[1300]: 2025-05-10 02:16:54.198 [INFO][4035] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Namespace="calico-system" Pod="csi-node-driver-msjzq" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b62d1657-f00b-4957-b89b-113ee88c8696", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-msjzq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1d5e5987e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:54.252280 env[1300]: 2025-05-10 02:16:54.199 [INFO][4035] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.133/32] ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Namespace="calico-system" Pod="csi-node-driver-msjzq" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:54.252280 env[1300]: 2025-05-10 02:16:54.199 [INFO][4035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1d5e5987e2 ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Namespace="calico-system" Pod="csi-node-driver-msjzq" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:54.252280 env[1300]: 2025-05-10 02:16:54.215 [INFO][4035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Namespace="calico-system" Pod="csi-node-driver-msjzq" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:54.252280 env[1300]: 2025-05-10 02:16:54.216 [INFO][4035] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Namespace="calico-system" 
Pod="csi-node-driver-msjzq" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b62d1657-f00b-4957-b89b-113ee88c8696", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4", Pod:"csi-node-driver-msjzq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1d5e5987e2", MAC:"16:21:cd:a6:9a:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:16:54.252280 env[1300]: 2025-05-10 02:16:54.246 [INFO][4035] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" Namespace="calico-system" Pod="csi-node-driver-msjzq" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:16:54.286112 systemd[1]: run-containerd-runc-k8s.io-a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7-runc.AyOPEs.mount: Deactivated successfully. May 10 02:16:54.432726 env[1300]: time="2025-05-10T02:16:54.432615146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:16:54.435980 env[1300]: time="2025-05-10T02:16:54.435932044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6qgjr,Uid:ab904080-8c4d-457a-bf1e-8c5d5229dc67,Namespace:kube-system,Attempt:1,} returns sandbox id \"a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7\"" May 10 02:16:54.439301 env[1300]: time="2025-05-10T02:16:54.432948173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:16:54.440542 env[1300]: time="2025-05-10T02:16:54.440494422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:16:54.440956 env[1300]: time="2025-05-10T02:16:54.440895808Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4 pid=4238 runtime=io.containerd.runc.v2 May 10 02:16:54.445588 env[1300]: time="2025-05-10T02:16:54.445528361Z" level=info msg="CreateContainer within sandbox \"a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 02:16:54.483883 env[1300]: time="2025-05-10T02:16:54.483798421Z" level=info msg="CreateContainer within sandbox \"a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea78e9b72d26bbd506991bd9c35a3e3e956096f9ad7c02acfa1e156b3709ff00\"" May 10 02:16:54.485476 env[1300]: time="2025-05-10T02:16:54.485440505Z" level=info msg="StartContainer for \"ea78e9b72d26bbd506991bd9c35a3e3e956096f9ad7c02acfa1e156b3709ff00\"" May 10 02:16:54.489541 env[1300]: time="2025-05-10T02:16:54.489501522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d5855849-n77bg,Uid:d0632dba-48ae-4acf-8c28-096b5737e007,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418\"" May 10 02:16:54.511000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.511000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.511000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.511000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.511000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.511000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.511000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.511000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.511000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.511000 audit: BPF prog-id=27 op=LOAD May 10 02:16:54.511000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd7a8ac450 a2=40 a3=1 items=0 ppid=3620 
pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.511000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.513000 audit: BPF prog-id=27 op=UNLOAD May 10 02:16:54.513000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.513000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd7a8ac520 a2=50 a3=7ffd7a8ac600 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.513000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7a8ac460 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd7a8ac490 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd7a8ac3a0 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7a8ac4b0 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7a8ac490 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7a8ac480 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7a8ac4b0 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffd7a8ac490 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd7a8ac4b0 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd7a8ac480 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.624000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.624000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7a8ac4f0 a2=28 a3=0 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.624000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd7a8ac2a0 a2=50 a3=1 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.633000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.633000 audit: BPF prog-id=28 op=LOAD May 10 02:16:54.633000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd7a8ac2a0 a2=94 a3=5 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.633000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.635000 audit: BPF prog-id=28 op=UNLOAD May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd7a8ac350 a2=50 a3=1 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.635000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd7a8ac470 a2=4 a3=38 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.635000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { confidentiality } for pid=4119 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:16:54.635000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd7a8ac4c0 a2=94 a3=6 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.635000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { confidentiality } for pid=4119 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:16:54.635000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd7a8abc70 a2=94 a3=83 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.635000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 
02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { perfmon } for pid=4119 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.635000 audit[4119]: AVC avc: denied { confidentiality } for pid=4119 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:16:54.635000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd7a8abc70 a2=94 a3=83 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.635000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.636000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.636000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd7a8ad6b0 a2=10 a3=f1f00800 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.636000 audit[4119]: 
AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.636000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd7a8ad550 a2=10 a3=3 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.636000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.636000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd7a8ad4f0 a2=10 a3=3 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.636000 audit[4119]: AVC avc: denied { bpf } for pid=4119 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:16:54.636000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd7a8ad4f0 a2=10 a3=7 items=0 ppid=3620 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:16:54.653528 env[1300]: time="2025-05-10T02:16:54.653478563Z" level=info msg="StartContainer for \"ea78e9b72d26bbd506991bd9c35a3e3e956096f9ad7c02acfa1e156b3709ff00\" returns successfully" May 10 02:16:54.661000 audit: BPF prog-id=23 op=UNLOAD May 10 02:16:54.661000 audit[4246]: SYSCALL arch=c000003e syscall=437 success=yes exit=9 a0=8 a1=c0002e48c0 a2=c0001440a0 a3=18 items=1 ppid=4238 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:54.661000 audit: CWD cwd="/run/containerd/io.containerd.runtime.v2.task/k8s.io/b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4" May 10 02:16:54.661000 audit: PATH item=0 name="devices/kubepods/besteffort/podb62d1657-f00b-4957-b89b-113ee88c8696/b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4/devices.allow" inode=992 dev=00:25 mode=0100200 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:cgroup_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:16:54.661000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237383661363835613763616639373235616464376431383262633334 May 10 02:16:54.766571 systemd-networkd[1074]: cali37a65ea8dcd: Gained IPv6LL May 10 02:16:54.800044 env[1300]: time="2025-05-10T02:16:54.799807630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msjzq,Uid:b62d1657-f00b-4957-b89b-113ee88c8696,Namespace:calico-system,Attempt:1,} returns sandbox id \"b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4\"" May 10 02:16:54.929734 kubelet[2271]: I0510 02:16:54.929610 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6qgjr" podStartSLOduration=41.929539698 podStartE2EDuration="41.929539698s" podCreationTimestamp="2025-05-10 02:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:16:54.910079587 +0000 UTC m=+54.693261118" watchObservedRunningTime="2025-05-10 02:16:54.929539698 +0000 UTC m=+54.712721227" May 10 02:16:55.017000 audit[4343]: NETFILTER_CFG table=filter:101 family=2 entries=10 op=nft_register_rule pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:55.017000 audit[4343]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe91710300 a2=0 a3=7ffe917102ec items=0 ppid=2455 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:55.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:55.028000 audit[4343]: NETFILTER_CFG table=nat:102 family=2 entries=44 op=nft_register_rule pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:55.028000 audit[4343]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe91710300 a2=0 a3=7ffe917102ec items=0 ppid=2455 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:55.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:55.061000 audit[4351]: NETFILTER_CFG table=filter:103 family=2 entries=10 op=nft_register_rule pid=4351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:55.061000 audit[4351]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffec6c4eb40 a2=0 a3=7ffec6c4eb2c items=0 ppid=2455 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:55.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:55.085000 audit[4349]: NETFILTER_CFG table=mangle:104 family=2 entries=16 op=nft_register_chain pid=4349 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:16:55.086000 audit[4351]: NETFILTER_CFG table=nat:105 family=2 
entries=56 op=nft_register_chain pid=4351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:16:55.086000 audit[4351]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffec6c4eb40 a2=0 a3=7ffec6c4eb2c items=0 ppid=2455 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:55.086000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:16:55.085000 audit[4349]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffde53053d0 a2=0 a3=7ffde53053bc items=0 ppid=3620 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:55.085000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:16:55.095000 audit[4347]: NETFILTER_CFG table=raw:106 family=2 entries=21 op=nft_register_chain pid=4347 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:16:55.095000 audit[4347]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffde3c8e2c0 a2=0 a3=7ffde3c8e2ac items=0 ppid=3620 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:55.095000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:16:55.108000 audit[4350]: NETFILTER_CFG table=nat:107 family=2 entries=15 op=nft_register_chain pid=4350 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:16:55.108000 audit[4350]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffddb68e1c0 a2=0 a3=7ffddb68e1ac items=0 ppid=3620 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:55.108000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:16:55.110000 audit[4356]: NETFILTER_CFG table=filter:108 family=2 entries=99 op=nft_register_chain pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:16:55.110000 audit[4356]: SYSCALL arch=c000003e syscall=46 success=yes exit=53840 a0=3 a1=7fff7e8fd960 a2=0 a3=7fff7e8fd94c items=0 ppid=3620 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:55.110000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:16:55.134890 systemd-networkd[1074]: cali67b4013eb85: Gained IPv6LL May 10 02:16:55.171000 audit[4362]: NETFILTER_CFG table=filter:109 family=2 entries=102 op=nft_register_chain 
pid=4362 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:16:55.171000 audit[4362]: SYSCALL arch=c000003e syscall=46 success=yes exit=58592 a0=3 a1=7ffd48c43c30 a2=0 a3=7ffd48c43c1c items=0 ppid=3620 pid=4362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:16:55.171000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:16:55.519934 systemd-networkd[1074]: vxlan.calico: Gained IPv6LL May 10 02:16:55.520404 systemd-networkd[1074]: calic1860a1d3f5: Gained IPv6LL May 10 02:16:56.095071 systemd-networkd[1074]: calic1d5e5987e2: Gained IPv6LL May 10 02:16:57.259599 env[1300]: time="2025-05-10T02:16:57.259494623Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:57.261847 env[1300]: time="2025-05-10T02:16:57.261806054Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:57.264060 env[1300]: time="2025-05-10T02:16:57.264024544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:57.265616 env[1300]: time="2025-05-10T02:16:57.265580341Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:16:57.266481 env[1300]: time="2025-05-10T02:16:57.266440838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 10 02:16:57.270316 env[1300]: time="2025-05-10T02:16:57.270283007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 02:16:57.344958 env[1300]: time="2025-05-10T02:16:57.344880239Z" level=info msg="CreateContainer within sandbox \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 02:16:57.412363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236728767.mount: Deactivated successfully. 
May 10 02:16:57.417640 env[1300]: time="2025-05-10T02:16:57.417574681Z" level=info msg="CreateContainer within sandbox \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4\"" May 10 02:16:57.418555 env[1300]: time="2025-05-10T02:16:57.418317654Z" level=info msg="StartContainer for \"991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4\"" May 10 02:16:57.566461 env[1300]: time="2025-05-10T02:16:57.566385436Z" level=info msg="StartContainer for \"991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4\" returns successfully" May 10 02:16:57.936845 kubelet[2271]: I0510 02:16:57.936450 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d9c5fc9f8-m6gc5" podStartSLOduration=34.144014688 podStartE2EDuration="37.93641138s" podCreationTimestamp="2025-05-10 02:16:20 +0000 UTC" firstStartedPulling="2025-05-10 02:16:53.475452443 +0000 UTC m=+53.258633964" lastFinishedPulling="2025-05-10 02:16:57.267849126 +0000 UTC m=+57.051030656" observedRunningTime="2025-05-10 02:16:57.933173546 +0000 UTC m=+57.716355077" watchObservedRunningTime="2025-05-10 02:16:57.93641138 +0000 UTC m=+57.719592908" May 10 02:17:00.502334 env[1300]: time="2025-05-10T02:17:00.502246385Z" level=info msg="StopPodSandbox for \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\"" May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.770 [WARNING][4445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"033eccb2-2101-4733-a8e6-14ca25528bba", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36", Pod:"coredns-7db6d8ff4d-tzgf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d86b0ea5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.773 [INFO][4445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.773 [INFO][4445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" iface="eth0" netns="" May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.773 [INFO][4445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.773 [INFO][4445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.840 [INFO][4453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" HandleID="k8s-pod-network.ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.840 [INFO][4453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.840 [INFO][4453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.852 [WARNING][4453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" HandleID="k8s-pod-network.ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.852 [INFO][4453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" HandleID="k8s-pod-network.ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.854 [INFO][4453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:00.860716 env[1300]: 2025-05-10 02:17:00.858 [INFO][4445] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:17:00.861808 env[1300]: time="2025-05-10T02:17:00.860768668Z" level=info msg="TearDown network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\" successfully" May 10 02:17:00.861808 env[1300]: time="2025-05-10T02:17:00.860812727Z" level=info msg="StopPodSandbox for \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\" returns successfully" May 10 02:17:00.869449 env[1300]: time="2025-05-10T02:17:00.869378019Z" level=info msg="RemovePodSandbox for \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\"" May 10 02:17:00.869554 env[1300]: time="2025-05-10T02:17:00.869454900Z" level=info msg="Forcibly stopping sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\"" May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:00.973 [WARNING][4473] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"033eccb2-2101-4733-a8e6-14ca25528bba", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"dbda28764aaaa8b37e6a07c57940f3001ae20aab96d8e45ba215ca3284018c36", Pod:"coredns-7db6d8ff4d-tzgf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d86b0ea5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:00.973 [INFO][4473] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:00.974 [INFO][4473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" iface="eth0" netns="" May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:00.974 [INFO][4473] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:00.974 [INFO][4473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:01.056 [INFO][4480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" HandleID="k8s-pod-network.ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:01.056 [INFO][4480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:01.056 [INFO][4480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:01.068 [WARNING][4480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" HandleID="k8s-pod-network.ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:01.068 [INFO][4480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" HandleID="k8s-pod-network.ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--tzgf6-eth0" May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:01.070 [INFO][4480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:01.076061 env[1300]: 2025-05-10 02:17:01.072 [INFO][4473] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e" May 10 02:17:01.078149 env[1300]: time="2025-05-10T02:17:01.076779826Z" level=info msg="TearDown network for sandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\" successfully" May 10 02:17:01.108437 env[1300]: time="2025-05-10T02:17:01.108394543Z" level=info msg="RemovePodSandbox \"ac0f69cf8d621b37dd0c59dfc5aa9cb6100de993bbd5cc0badd7c16bd24c130e\" returns successfully" May 10 02:17:01.111692 env[1300]: time="2025-05-10T02:17:01.111573621Z" level=info msg="StopPodSandbox for \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\"" May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.193 [WARNING][4498] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ab904080-8c4d-457a-bf1e-8c5d5229dc67", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7", Pod:"coredns-7db6d8ff4d-6qgjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67b4013eb85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.193 [INFO][4498] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.193 [INFO][4498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" iface="eth0" netns="" May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.193 [INFO][4498] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.193 [INFO][4498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.245 [INFO][4505] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" HandleID="k8s-pod-network.6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.246 [INFO][4505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.246 [INFO][4505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.254 [WARNING][4505] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" HandleID="k8s-pod-network.6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.255 [INFO][4505] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" HandleID="k8s-pod-network.6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.257 [INFO][4505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:01.262468 env[1300]: 2025-05-10 02:17:01.260 [INFO][4498] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:17:01.264135 env[1300]: time="2025-05-10T02:17:01.262502309Z" level=info msg="TearDown network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\" successfully" May 10 02:17:01.264135 env[1300]: time="2025-05-10T02:17:01.262557625Z" level=info msg="StopPodSandbox for \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\" returns successfully" May 10 02:17:01.264135 env[1300]: time="2025-05-10T02:17:01.263236386Z" level=info msg="RemovePodSandbox for \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\"" May 10 02:17:01.264135 env[1300]: time="2025-05-10T02:17:01.263279593Z" level=info msg="Forcibly stopping sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\"" May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.320 [WARNING][4526] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ab904080-8c4d-457a-bf1e-8c5d5229dc67", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"a4d535368e21bd67125f99627f86d18961b11f3992ae703b58710e80bb722ca7", Pod:"coredns-7db6d8ff4d-6qgjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67b4013eb85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.320 [INFO][4526] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.320 [INFO][4526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" iface="eth0" netns="" May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.320 [INFO][4526] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.320 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.354 [INFO][4533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" HandleID="k8s-pod-network.6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.355 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.355 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.364 [WARNING][4533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" HandleID="k8s-pod-network.6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.364 [INFO][4533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" HandleID="k8s-pod-network.6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" Workload="srv--it8yl.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--6qgjr-eth0" May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.366 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:01.370885 env[1300]: 2025-05-10 02:17:01.368 [INFO][4526] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33" May 10 02:17:01.372910 env[1300]: time="2025-05-10T02:17:01.370847180Z" level=info msg="TearDown network for sandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\" successfully" May 10 02:17:01.382435 env[1300]: time="2025-05-10T02:17:01.382390743Z" level=info msg="RemovePodSandbox \"6792ea971077cb15ea3ce6eea116c03c64aae1e2c8554d66b9c3f23fff728e33\" returns successfully" May 10 02:17:01.383205 env[1300]: time="2025-05-10T02:17:01.383167223Z" level=info msg="StopPodSandbox for \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\"" May 10 02:17:01.519711 env[1300]: time="2025-05-10T02:17:01.519088638Z" level=info msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\"" May 10 02:17:01.536533 env[1300]: time="2025-05-10T02:17:01.536477099Z" level=info msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\"" May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.464 [WARNING][4553] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0", GenerateName:"calico-kube-controllers-6d9c5fc9f8-", Namespace:"calico-system", SelfLink:"", UID:"26d91028-bb97-4013-8daa-28d750dcefc1", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9c5fc9f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0", Pod:"calico-kube-controllers-6d9c5fc9f8-m6gc5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37a65ea8dcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.464 [INFO][4553] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.464 [INFO][4553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" iface="eth0" netns="" May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.464 [INFO][4553] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.464 [INFO][4553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.515 [INFO][4560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" HandleID="k8s-pod-network.8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.515 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.516 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.531 [WARNING][4560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" HandleID="k8s-pod-network.8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.531 [INFO][4560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" HandleID="k8s-pod-network.8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.533 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:01.561830 env[1300]: 2025-05-10 02:17:01.546 [INFO][4553] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:17:01.562743 env[1300]: time="2025-05-10T02:17:01.561854846Z" level=info msg="TearDown network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\" successfully" May 10 02:17:01.562743 env[1300]: time="2025-05-10T02:17:01.561895151Z" level=info msg="StopPodSandbox for \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\" returns successfully" May 10 02:17:01.580285 env[1300]: time="2025-05-10T02:17:01.580237313Z" level=info msg="RemovePodSandbox for \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\"" May 10 02:17:01.580475 env[1300]: time="2025-05-10T02:17:01.580293329Z" level=info msg="Forcibly stopping sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\"" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.709 [INFO][4607] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.709 [INFO][4607] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" iface="eth0" netns="/var/run/netns/cni-3da04719-c39a-c673-fc86-6b9cc9ea1284" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.710 [INFO][4607] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" iface="eth0" netns="/var/run/netns/cni-3da04719-c39a-c673-fc86-6b9cc9ea1284" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.711 [INFO][4607] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" iface="eth0" netns="/var/run/netns/cni-3da04719-c39a-c673-fc86-6b9cc9ea1284" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.711 [INFO][4607] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.711 [INFO][4607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.772 [INFO][4634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.773 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.773 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.793 [WARNING][4634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.793 [INFO][4634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.797 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:01.801902 env[1300]: 2025-05-10 02:17:01.799 [INFO][4607] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:17:01.809922 env[1300]: time="2025-05-10T02:17:01.808995723Z" level=info msg="TearDown network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" successfully" May 10 02:17:01.809922 env[1300]: time="2025-05-10T02:17:01.809043908Z" level=info msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" returns successfully" May 10 02:17:01.810586 systemd[1]: run-netns-cni\x2d3da04719\x2dc39a\x2dc673\x2dfc86\x2d6b9cc9ea1284.mount: Deactivated successfully. May 10 02:17:01.814526 env[1300]: time="2025-05-10T02:17:01.814486901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6df465c-9n96s,Uid:cd3e93bd-3e59-43f7-987b-d85581ad5591,Namespace:calico-apiserver,Attempt:1,}" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.649 [INFO][4589] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.649 [INFO][4589] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" iface="eth0" netns="/var/run/netns/cni-ba81a4cc-9c00-07d0-e404-11f38b3bdf2d" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.650 [INFO][4589] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" iface="eth0" netns="/var/run/netns/cni-ba81a4cc-9c00-07d0-e404-11f38b3bdf2d" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.651 [INFO][4589] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" iface="eth0" netns="/var/run/netns/cni-ba81a4cc-9c00-07d0-e404-11f38b3bdf2d" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.651 [INFO][4589] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.651 [INFO][4589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.872 [INFO][4624] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.872 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.872 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.882 [WARNING][4624] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.882 [INFO][4624] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.884 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:01.894790 env[1300]: 2025-05-10 02:17:01.892 [INFO][4589] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:17:01.899678 systemd[1]: run-netns-cni\x2dba81a4cc\x2d9c00\x2d07d0\x2de404\x2d11f38b3bdf2d.mount: Deactivated successfully. 
May 10 02:17:01.901213 env[1300]: time="2025-05-10T02:17:01.901149597Z" level=info msg="TearDown network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" successfully" May 10 02:17:01.901327 env[1300]: time="2025-05-10T02:17:01.901211121Z" level=info msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" returns successfully" May 10 02:17:01.902136 env[1300]: time="2025-05-10T02:17:01.902099897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6df465c-t7qd5,Uid:3bda719f-2d0c-40dd-8013-db678548720f,Namespace:calico-apiserver,Attempt:1,}" May 10 02:17:01.943125 env[1300]: time="2025-05-10T02:17:01.943070393Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:01.961969 env[1300]: time="2025-05-10T02:17:01.959011856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:01.965908 env[1300]: time="2025-05-10T02:17:01.965873812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.830 [WARNING][4623] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0", GenerateName:"calico-kube-controllers-6d9c5fc9f8-", Namespace:"calico-system", SelfLink:"", UID:"26d91028-bb97-4013-8daa-28d750dcefc1", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9c5fc9f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0", Pod:"calico-kube-controllers-6d9c5fc9f8-m6gc5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37a65ea8dcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.835 [INFO][4623] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.835 [INFO][4623] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" iface="eth0" netns="" May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.835 [INFO][4623] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.835 [INFO][4623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.956 [INFO][4642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" HandleID="k8s-pod-network.8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.959 [INFO][4642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.959 [INFO][4642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.972 [WARNING][4642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" HandleID="k8s-pod-network.8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.972 [INFO][4642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" HandleID="k8s-pod-network.8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.976 [INFO][4642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:01.980917 env[1300]: 2025-05-10 02:17:01.979 [INFO][4623] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05" May 10 02:17:01.981906 env[1300]: time="2025-05-10T02:17:01.981863651Z" level=info msg="TearDown network for sandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\" successfully" May 10 02:17:01.989642 env[1300]: time="2025-05-10T02:17:01.989587653Z" level=info msg="RemovePodSandbox \"8fe0ec129b6af96f437db87bde4205fd65be569edd059e3d512d1af9a1760f05\" returns successfully" May 10 02:17:01.989907 env[1300]: time="2025-05-10T02:17:01.989870581Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:01.990841 env[1300]: time="2025-05-10T02:17:01.990805094Z" level=info msg="StopPodSandbox for \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\"" May 10 02:17:01.991475 env[1300]: time="2025-05-10T02:17:01.991427385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 10 02:17:01.997365 env[1300]: time="2025-05-10T02:17:01.997316833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 10 02:17:02.005245 env[1300]: time="2025-05-10T02:17:02.005190077Z" level=info msg="CreateContainer within sandbox \"bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 02:17:02.016055 env[1300]: time="2025-05-10T02:17:02.016008915Z" level=info msg="CreateContainer within sandbox \"bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d72a16e9f84baeceda6c3f52bdc983c15a99b2944947cb7fa9aac6b064593916\"" May 10 02:17:02.021178 env[1300]: time="2025-05-10T02:17:02.021138443Z" level=info msg="StartContainer for \"d72a16e9f84baeceda6c3f52bdc983c15a99b2944947cb7fa9aac6b064593916\"" May 10 02:17:02.215214 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 02:17:02.215433 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali27bc54b3006: link becomes ready May 10 02:17:02.217876 systemd-networkd[1074]: cali27bc54b3006: Link UP May 10 02:17:02.218190 systemd-networkd[1074]: cali27bc54b3006: Gained carrier May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:01.979 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0 calico-apiserver-59c6df465c- calico-apiserver cd3e93bd-3e59-43f7-987b-d85581ad5591 879 0 2025-05-10 02:16:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c6df465c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-it8yl.gb1.brightbox.com calico-apiserver-59c6df465c-9n96s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali27bc54b3006 [] []}} ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-9n96s" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-" May 10 02:17:02.300136 env[1300]: 2025-05-10 
02:17:01.979 [INFO][4643] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-9n96s" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.088 [INFO][4680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.110 [INFO][4680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc8a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-it8yl.gb1.brightbox.com", "pod":"calico-apiserver-59c6df465c-9n96s", "timestamp":"2025-05-10 02:17:02.08802212 +0000 UTC"}, Hostname:"srv-it8yl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.110 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.110 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.111 [INFO][4680] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-it8yl.gb1.brightbox.com' May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.113 [INFO][4680] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.121 [INFO][4680] ipam/ipam.go 372: Looking up existing affinities for host host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.130 [INFO][4680] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.148 [INFO][4680] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.154 [INFO][4680] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.154 [INFO][4680] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.160 [INFO][4680] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731 May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.170 [INFO][4680] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.187 [INFO][4680] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.134/26] block=192.168.123.128/26 handle="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.187 [INFO][4680] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.134/26] handle="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.187 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
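(Editor's note, not part of the log: the IPAM sequence above claims 192.168.123.134 for calico-apiserver-59c6df465c-9n96s out of the block 192.168.123.128/26 whose affinity was confirmed for this host. A quick sanity check with Python's standard ipaddress module that the claimed address falls inside that /26; illustrative only.)

    import ipaddress

    # Block with confirmed affinity for srv-it8yl.gb1.brightbox.com, per the IPAM entries above.
    block = ipaddress.ip_network("192.168.123.128/26")
    assigned = ipaddress.ip_address("192.168.123.134")

    print(assigned in block)        # True: the claimed IP lies inside the affine block
    print(block.num_addresses)      # 64 addresses in the /26
    print(block.network_address, block.broadcast_address)  # 192.168.123.128 192.168.123.191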
May 10 02:17:02.300136 env[1300]: 2025-05-10 02:17:02.187 [INFO][4680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.134/26] IPv6=[] ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:02.303092 env[1300]: 2025-05-10 02:17:02.202 [INFO][4643] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-9n96s" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0", GenerateName:"calico-apiserver-59c6df465c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd3e93bd-3e59-43f7-987b-d85581ad5591", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6df465c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-59c6df465c-9n96s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27bc54b3006", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:02.303092 env[1300]: 2025-05-10 02:17:02.202 [INFO][4643] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.134/32] ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-9n96s" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:02.303092 env[1300]: 2025-05-10 02:17:02.202 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27bc54b3006 ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-9n96s" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:02.303092 env[1300]: 2025-05-10 02:17:02.222 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-9n96s" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:02.303092 env[1300]: 2025-05-10 02:17:02.224 [INFO][4643] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-9n96s" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0", GenerateName:"calico-apiserver-59c6df465c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd3e93bd-3e59-43f7-987b-d85581ad5591", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6df465c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731", Pod:"calico-apiserver-59c6df465c-9n96s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27bc54b3006", MAC:"36:9c:cc:ab:f5:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:02.303092 env[1300]: 2025-05-10 02:17:02.263 [INFO][4643] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-9n96s" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:02.349092 kernel: kauditd_printk_skb: 536 callbacks suppressed May 10 02:17:02.350460 kernel: audit: type=1325 audit(1746843422.335:416): table=filter:110 family=2 entries=46 op=nft_register_chain pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:17:02.352536 kernel: audit: type=1300 audit(1746843422.335:416): arch=c000003e syscall=46 success=yes exit=23876 a0=3 a1=7ffe0cc46360 a2=0 a3=7ffe0cc4634c items=0 ppid=3620 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:02.335000 audit[4750]: NETFILTER_CFG table=filter:110 family=2 entries=46 op=nft_register_chain pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:17:02.335000 audit[4750]: SYSCALL arch=c000003e syscall=46 success=yes exit=23876 a0=3 a1=7ffe0cc46360 a2=0 a3=7ffe0cc4634c items=0 ppid=3620 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:02.335000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:17:02.361741 kernel: audit: type=1327 audit(1746843422.335:416): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:17:02.430448 env[1300]: time="2025-05-10T02:17:02.425731855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:17:02.430448 env[1300]: time="2025-05-10T02:17:02.425999609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:17:02.430448 env[1300]: time="2025-05-10T02:17:02.426106181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:17:02.430448 env[1300]: time="2025-05-10T02:17:02.426502941Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731 pid=4774 runtime=io.containerd.runc.v2 May 10 02:17:02.449931 env[1300]: time="2025-05-10T02:17:02.448180585Z" level=info msg="StartContainer for \"d72a16e9f84baeceda6c3f52bdc983c15a99b2944947cb7fa9aac6b064593916\" returns successfully" May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.195 [WARNING][4703] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b62d1657-f00b-4957-b89b-113ee88c8696", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4", Pod:"csi-node-driver-msjzq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1d5e5987e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.195 [INFO][4703] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.195 [INFO][4703] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" iface="eth0" netns="" May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.195 [INFO][4703] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.195 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.414 [INFO][4743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" HandleID="k8s-pod-network.8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.416 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.445 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.475 [WARNING][4743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" HandleID="k8s-pod-network.8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.475 [INFO][4743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" HandleID="k8s-pod-network.8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.479 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:02.494923 env[1300]: 2025-05-10 02:17:02.488 [INFO][4703] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:17:02.494923 env[1300]: time="2025-05-10T02:17:02.493652191Z" level=info msg="TearDown network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\" successfully" May 10 02:17:02.494923 env[1300]: time="2025-05-10T02:17:02.493694231Z" level=info msg="StopPodSandbox for \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\" returns successfully" May 10 02:17:02.496292 env[1300]: time="2025-05-10T02:17:02.495183684Z" level=info msg="RemovePodSandbox for \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\"" May 10 02:17:02.496292 env[1300]: time="2025-05-10T02:17:02.495224638Z" level=info msg="Forcibly stopping sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\"" May 10 02:17:02.510782 systemd-networkd[1074]: cali6229da7e49a: Link UP May 10 02:17:02.519082 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6229da7e49a: link becomes ready May 10 02:17:02.518692 systemd-networkd[1074]: cali6229da7e49a: Gained carrier May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.073 [INFO][4662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0 calico-apiserver-59c6df465c- calico-apiserver 3bda719f-2d0c-40dd-8013-db678548720f 877 0 2025-05-10 02:16:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c6df465c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-it8yl.gb1.brightbox.com calico-apiserver-59c6df465c-t7qd5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6229da7e49a [] []}} ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-t7qd5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.073 [INFO][4662] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-t7qd5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.371 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.393 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315a70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-it8yl.gb1.brightbox.com", "pod":"calico-apiserver-59c6df465c-t7qd5", "timestamp":"2025-05-10 02:17:02.371852466 +0000 UTC"}, 
Hostname:"srv-it8yl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.393 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.393 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.393 [INFO][4712] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-it8yl.gb1.brightbox.com' May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.396 [INFO][4712] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.402 [INFO][4712] ipam/ipam.go 372: Looking up existing affinities for host host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.412 [INFO][4712] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.415 [INFO][4712] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.419 [INFO][4712] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.419 [INFO][4712] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.421 [INFO][4712] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5 May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.431 [INFO][4712] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.444 [INFO][4712] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.135/26] block=192.168.123.128/26 handle="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.445 [INFO][4712] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.135/26] handle="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.445 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 02:17:02.571668 env[1300]: 2025-05-10 02:17:02.445 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.135/26] IPv6=[] ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:02.577356 env[1300]: 2025-05-10 02:17:02.476 [INFO][4662] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-t7qd5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0", GenerateName:"calico-apiserver-59c6df465c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3bda719f-2d0c-40dd-8013-db678548720f", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6df465c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-59c6df465c-t7qd5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6229da7e49a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:02.577356 env[1300]: 2025-05-10 02:17:02.477 [INFO][4662] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.135/32] ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-t7qd5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:02.577356 env[1300]: 2025-05-10 02:17:02.477 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6229da7e49a ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-t7qd5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:02.577356 env[1300]: 2025-05-10 02:17:02.532 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-t7qd5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:02.577356 env[1300]: 2025-05-10 02:17:02.536 [INFO][4662] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-t7qd5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0", GenerateName:"calico-apiserver-59c6df465c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3bda719f-2d0c-40dd-8013-db678548720f", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6df465c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5", Pod:"calico-apiserver-59c6df465c-t7qd5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6229da7e49a", MAC:"e6:c0:a0:6e:8b:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:02.577356 env[1300]: 2025-05-10 02:17:02.564 [INFO][4662] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Namespace="calico-apiserver" Pod="calico-apiserver-59c6df465c-t7qd5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:02.635000 audit[4841]: NETFILTER_CFG table=filter:111 family=2 entries=56 op=nft_register_chain pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:17:02.651650 kernel: audit: type=1325 audit(1746843422.635:417): table=filter:111 family=2 entries=56 op=nft_register_chain pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:17:02.651822 kernel: audit: type=1300 audit(1746843422.635:417): arch=c000003e syscall=46 success=yes exit=27916 a0=3 a1=7ffeb7df0e70 a2=0 a3=7ffeb7df0e5c items=0 ppid=3620 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:02.635000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=27916 a0=3 a1=7ffeb7df0e70 a2=0 a3=7ffeb7df0e5c items=0 ppid=3620 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:02.635000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 
02:17:02.659710 kernel: audit: type=1327 audit(1746843422.635:417): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:17:02.679256 env[1300]: time="2025-05-10T02:17:02.674775647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:17:02.679256 env[1300]: time="2025-05-10T02:17:02.674887334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:17:02.679256 env[1300]: time="2025-05-10T02:17:02.674947051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:17:02.679256 env[1300]: time="2025-05-10T02:17:02.675535246Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5 pid=4849 runtime=io.containerd.runc.v2 May 10 02:17:02.769426 env[1300]: time="2025-05-10T02:17:02.765307729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6df465c-9n96s,Uid:cd3e93bd-3e59-43f7-987b-d85581ad5591,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\"" May 10 02:17:02.779419 env[1300]: time="2025-05-10T02:17:02.778976909Z" level=info msg="CreateContainer within sandbox \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 02:17:02.823858 env[1300]: time="2025-05-10T02:17:02.823787138Z" level=info msg="CreateContainer within sandbox \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1\"" May 10 02:17:02.825197 env[1300]: time="2025-05-10T02:17:02.825130153Z" level=info msg="StartContainer for \"c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1\"" May 10 02:17:02.884378 systemd[1]: run-containerd-runc-k8s.io-c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1-runc.DIc8xu.mount: Deactivated successfully. May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.708 [WARNING][4825] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b62d1657-f00b-4957-b89b-113ee88c8696", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4", Pod:"csi-node-driver-msjzq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1d5e5987e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.708 [INFO][4825] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.708 [INFO][4825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" iface="eth0" netns="" May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.708 [INFO][4825] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.708 [INFO][4825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.906 [INFO][4870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" HandleID="k8s-pod-network.8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.907 [INFO][4870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.907 [INFO][4870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.919 [WARNING][4870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" HandleID="k8s-pod-network.8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.919 [INFO][4870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" HandleID="k8s-pod-network.8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" Workload="srv--it8yl.gb1.brightbox.com-k8s-csi--node--driver--msjzq-eth0" May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.922 [INFO][4870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:02.940948 env[1300]: 2025-05-10 02:17:02.929 [INFO][4825] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f" May 10 02:17:02.942513 env[1300]: time="2025-05-10T02:17:02.941830775Z" level=info msg="TearDown network for sandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\" successfully" May 10 02:17:02.954530 env[1300]: time="2025-05-10T02:17:02.954468073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6df465c-t7qd5,Uid:3bda719f-2d0c-40dd-8013-db678548720f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\"" May 10 02:17:02.977119 env[1300]: time="2025-05-10T02:17:02.976953418Z" level=info msg="RemovePodSandbox \"8f25ae07e774595e96260f5f7d22be5adeeab20de6b75eee1a0538ce1d258b7f\" returns successfully" May 10 02:17:02.978232 kubelet[2271]: I0510 02:17:02.978161 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67d5855849-n77bg" podStartSLOduration=34.474636796 podStartE2EDuration="41.978118065s" podCreationTimestamp="2025-05-10 02:16:21 +0000 UTC" firstStartedPulling="2025-05-10 02:16:54.491083622 +0000 UTC m=+54.274265138" lastFinishedPulling="2025-05-10 02:17:01.994564883 +0000 UTC m=+61.777746407" observedRunningTime="2025-05-10 02:17:02.976663571 +0000 UTC m=+62.759845099" watchObservedRunningTime="2025-05-10 02:17:02.978118065 +0000 UTC m=+62.761299601" May 10 02:17:02.983212 env[1300]: time="2025-05-10T02:17:02.981317095Z" level=info msg="StopPodSandbox for \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\"" May 10 02:17:03.021000 audit[4932]: NETFILTER_CFG table=filter:112 family=2 entries=10 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:03.030623 env[1300]: time="2025-05-10T02:17:03.028765206Z" level=info msg="CreateContainer within sandbox \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 02:17:03.033096 kernel: audit: type=1325 audit(1746843423.021:418): table=filter:112 family=2 entries=10 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:03.048736 kernel: audit: type=1300 audit(1746843423.021:418): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffec550ca50 a2=0 a3=7ffec550ca3c items=0 ppid=2455 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:03.048920 kernel: 
audit: type=1327 audit(1746843423.021:418): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:03.021000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffec550ca50 a2=0 a3=7ffec550ca3c items=0 ppid=2455 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:03.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:03.032000 audit[4932]: NETFILTER_CFG table=nat:113 family=2 entries=20 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:03.058671 kernel: audit: type=1325 audit(1746843423.032:419): table=nat:113 family=2 entries=20 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:03.032000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffec550ca50 a2=0 a3=7ffec550ca3c items=0 ppid=2455 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:03.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:03.128547 env[1300]: time="2025-05-10T02:17:03.128492599Z" level=info msg="CreateContainer within sandbox \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5\"" May 10 02:17:03.133469 env[1300]: time="2025-05-10T02:17:03.133430988Z" level=info msg="StartContainer for \"bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5\"" May 10 02:17:03.201542 env[1300]: time="2025-05-10T02:17:03.201490598Z" level=info msg="StartContainer for \"c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1\" returns successfully" May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.198 [WARNING][4933] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0", GenerateName:"calico-apiserver-67d5855849-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0632dba-48ae-4acf-8c28-096b5737e007", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d5855849", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418", Pod:"calico-apiserver-67d5855849-n77bg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1860a1d3f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.198 [INFO][4933] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.198 [INFO][4933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" iface="eth0" netns="" May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.198 [INFO][4933] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.198 [INFO][4933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.309 [INFO][4957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" HandleID="k8s-pod-network.c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.309 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.309 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.320 [WARNING][4957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" HandleID="k8s-pod-network.c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.320 [INFO][4957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" HandleID="k8s-pod-network.c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.322 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:03.328335 env[1300]: 2025-05-10 02:17:03.324 [INFO][4933] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:17:03.330202 env[1300]: time="2025-05-10T02:17:03.329101851Z" level=info msg="TearDown network for sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\" successfully" May 10 02:17:03.330202 env[1300]: time="2025-05-10T02:17:03.329160099Z" level=info msg="StopPodSandbox for \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\" returns successfully" May 10 02:17:03.330609 env[1300]: time="2025-05-10T02:17:03.330573657Z" level=info msg="RemovePodSandbox for \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\"" May 10 02:17:03.330899 env[1300]: time="2025-05-10T02:17:03.330795767Z" level=info msg="Forcibly stopping sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\"" May 10 02:17:03.546154 env[1300]: time="2025-05-10T02:17:03.546088593Z" level=info msg="StartContainer for \"bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5\" returns successfully" May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.613 [WARNING][4992] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0", GenerateName:"calico-apiserver-67d5855849-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0632dba-48ae-4acf-8c28-096b5737e007", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d5855849", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"bc42449b3e2f523563e82fbfef9faebc5b7ae7af03b22133886cb0720fec3418", Pod:"calico-apiserver-67d5855849-n77bg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1860a1d3f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.613 [INFO][4992] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.613 [INFO][4992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" iface="eth0" netns="" May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.613 [INFO][4992] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.613 [INFO][4992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.725 [INFO][5008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" HandleID="k8s-pod-network.c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.725 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.725 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.758 [WARNING][5008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" HandleID="k8s-pod-network.c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.758 [INFO][5008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" HandleID="k8s-pod-network.c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--n77bg-eth0" May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.766 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:03.776988 env[1300]: 2025-05-10 02:17:03.772 [INFO][4992] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9" May 10 02:17:03.780763 env[1300]: time="2025-05-10T02:17:03.780704062Z" level=info msg="TearDown network for sandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\" successfully" May 10 02:17:03.793272 env[1300]: time="2025-05-10T02:17:03.793213171Z" level=info msg="RemovePodSandbox \"c518eda6f060f344de6eaff76c9c32aaa4fb296736c5ec3b37adbc97e50ca2b9\" returns successfully" May 10 02:17:03.903111 systemd-networkd[1074]: cali6229da7e49a: Gained IPv6LL May 10 02:17:03.981059 kubelet[2271]: I0510 02:17:03.979202 2271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 02:17:04.036907 kubelet[2271]: I0510 02:17:04.036680 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59c6df465c-9n96s" podStartSLOduration=45.036642518 podStartE2EDuration="45.036642518s" podCreationTimestamp="2025-05-10 02:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:17:03.997613219 +0000 UTC m=+63.780794755" watchObservedRunningTime="2025-05-10 02:17:04.036642518 +0000 UTC m=+63.819824042" May 10 02:17:04.158941 systemd-networkd[1074]: cali27bc54b3006: Gained IPv6LL May 10 02:17:04.223000 audit[5020]: NETFILTER_CFG table=filter:114 family=2 entries=10 op=nft_register_rule pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:04.223000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe25c55b00 a2=0 a3=7ffe25c55aec items=0 ppid=2455 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:04.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:04.230000 audit[5020]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:04.230000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe25c55b00 a2=0 a3=7ffe25c55aec items=0 ppid=2455 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:04.230000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:04.262000 audit[5022]: NETFILTER_CFG table=filter:116 family=2 entries=10 op=nft_register_rule pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:04.262000 audit[5022]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffeb8369760 a2=0 a3=7ffeb836974c items=0 ppid=2455 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:04.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:04.267000 audit[5022]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:04.267000 audit[5022]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeb8369760 a2=0 a3=7ffeb836974c items=0 ppid=2455 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:04.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:04.449531 env[1300]: time="2025-05-10T02:17:04.449470299Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:04.452725 env[1300]: time="2025-05-10T02:17:04.452670476Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:04.457074 env[1300]: time="2025-05-10T02:17:04.457033489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:04.459928 env[1300]: time="2025-05-10T02:17:04.459862529Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:04.461739 env[1300]: time="2025-05-10T02:17:04.460893894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 10 02:17:04.468794 env[1300]: time="2025-05-10T02:17:04.467369615Z" level=info msg="CreateContainer within sandbox \"b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 10 02:17:04.512123 env[1300]: time="2025-05-10T02:17:04.512038469Z" level=info msg="CreateContainer within sandbox \"b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"08da04e05eb5d4bd43d986b70f9552197ddf3bb0c9ab5b593e79181bbb48ce67\"" May 10 02:17:04.514455 env[1300]: time="2025-05-10T02:17:04.513560718Z" level=info msg="StartContainer for 
\"08da04e05eb5d4bd43d986b70f9552197ddf3bb0c9ab5b593e79181bbb48ce67\"" May 10 02:17:04.768159 env[1300]: time="2025-05-10T02:17:04.768034315Z" level=info msg="StartContainer for \"08da04e05eb5d4bd43d986b70f9552197ddf3bb0c9ab5b593e79181bbb48ce67\" returns successfully" May 10 02:17:04.773280 env[1300]: time="2025-05-10T02:17:04.773239268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 10 02:17:04.813132 systemd[1]: run-containerd-runc-k8s.io-08da04e05eb5d4bd43d986b70f9552197ddf3bb0c9ab5b593e79181bbb48ce67-runc.ugkQ2i.mount: Deactivated successfully. May 10 02:17:05.430329 kubelet[2271]: I0510 02:17:05.430234 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59c6df465c-t7qd5" podStartSLOduration=45.430182201 podStartE2EDuration="45.430182201s" podCreationTimestamp="2025-05-10 02:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:17:04.039220465 +0000 UTC m=+63.822402000" watchObservedRunningTime="2025-05-10 02:17:05.430182201 +0000 UTC m=+65.213363730" May 10 02:17:05.493000 audit[5064]: NETFILTER_CFG table=filter:118 family=2 entries=9 op=nft_register_rule pid=5064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:05.493000 audit[5064]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff0c0c1c70 a2=0 a3=7fff0c0c1c5c items=0 ppid=2455 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:05.493000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:05.498000 audit[5064]: NETFILTER_CFG table=nat:119 family=2 entries=27 op=nft_register_chain pid=5064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:05.498000 audit[5064]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff0c0c1c70 a2=0 a3=7fff0c0c1c5c items=0 ppid=2455 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:05.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:06.532000 audit[5066]: NETFILTER_CFG table=filter:120 family=2 entries=8 op=nft_register_rule pid=5066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:06.532000 audit[5066]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffef67ed510 a2=0 a3=7ffef67ed4fc items=0 ppid=2455 pid=5066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:06.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:06.540000 audit[5066]: NETFILTER_CFG table=nat:121 family=2 entries=34 op=nft_register_chain pid=5066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:06.540000 audit[5066]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffef67ed510 a2=0 a3=7ffef67ed4fc 
items=0 ppid=2455 pid=5066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:06.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:07.127522 systemd[1]: run-containerd-runc-k8s.io-991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4-runc.OPScRS.mount: Deactivated successfully. May 10 02:17:07.131056 env[1300]: time="2025-05-10T02:17:07.131000874Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:07.133469 env[1300]: time="2025-05-10T02:17:07.133426716Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:07.140714 env[1300]: time="2025-05-10T02:17:07.138478853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:07.140714 env[1300]: time="2025-05-10T02:17:07.139336451Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 02:17:07.141151 env[1300]: time="2025-05-10T02:17:07.141106024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 10 02:17:07.171345 env[1300]: time="2025-05-10T02:17:07.170793588Z" level=info msg="CreateContainer within sandbox \"b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 10 02:17:07.188618 env[1300]: time="2025-05-10T02:17:07.188575178Z" level=info msg="CreateContainer within sandbox \"b786a685a7caf9725add7d182bc341d859bf3f0bc29bf39fec4ba38af37b41f4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e3695db4bae95f5bc31cf0f628dff5059f864ddc5e3815d57ca2aaa2c0352b26\"" May 10 02:17:07.196660 env[1300]: time="2025-05-10T02:17:07.196606820Z" level=info msg="StartContainer for \"e3695db4bae95f5bc31cf0f628dff5059f864ddc5e3815d57ca2aaa2c0352b26\"" May 10 02:17:07.311494 env[1300]: time="2025-05-10T02:17:07.311439987Z" level=info msg="StartContainer for \"e3695db4bae95f5bc31cf0f628dff5059f864ddc5e3815d57ca2aaa2c0352b26\" returns successfully" May 10 02:17:07.807290 kubelet[2271]: I0510 02:17:07.804849 2271 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 10 02:17:07.814471 kubelet[2271]: I0510 02:17:07.814427 2271 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 10 02:17:13.512134 systemd[1]: run-containerd-runc-k8s.io-0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d-runc.iO6OJm.mount: Deactivated successfully. 
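The audit records earlier in this section carry the process command line as a hex-encoded, NUL-separated proctitle= field, and the kernel-printed records also embed a Unix-epoch timestamp in audit(<epoch>:<serial>). A small Go sketch, not part of the log, with the literal values copied from the iptables-restore audit entries above:

    package main

    import (
        "bytes"
        "encoding/hex"
        "fmt"
        "time"
    )

    func main() {
        // proctitle value copied from the iptables-restore audit records above
        // (hex-encoded argv, NUL-separated).
        const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", bytes.Join(bytes.Split(raw, []byte{0}), []byte(" ")))
        // Prints: iptables-restore -w 5 -W 100000 --noflush --counters

        // The kernel-printed records embed the epoch, e.g. audit(1746843423.021:418)
        // seen earlier in this section:
        fmt.Println(time.UnixMilli(1746843423021).UTC()) // 2025-05-10 02:17:03.021 +0000 UTC, matching the journal timestamp
    }
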
May 10 02:17:13.889943 kubelet[2271]: I0510 02:17:13.879745 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-msjzq" podStartSLOduration=41.516226257 podStartE2EDuration="53.86237441s" podCreationTimestamp="2025-05-10 02:16:20 +0000 UTC" firstStartedPulling="2025-05-10 02:16:54.804027629 +0000 UTC m=+54.587209147" lastFinishedPulling="2025-05-10 02:17:07.150175779 +0000 UTC m=+66.933357300" observedRunningTime="2025-05-10 02:17:08.052049569 +0000 UTC m=+67.835231103" watchObservedRunningTime="2025-05-10 02:17:13.86237441 +0000 UTC m=+73.645555939" May 10 02:17:14.557543 systemd[1]: run-containerd-runc-k8s.io-991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4-runc.8PgBeQ.mount: Deactivated successfully. May 10 02:17:16.744731 env[1300]: time="2025-05-10T02:17:16.744120784Z" level=info msg="StopContainer for \"9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff\" with timeout 300 (s)" May 10 02:17:16.745478 env[1300]: time="2025-05-10T02:17:16.745417908Z" level=info msg="Stop container \"9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff\" with signal terminated" May 10 02:17:16.823000 audit[5179]: NETFILTER_CFG table=filter:122 family=2 entries=8 op=nft_register_rule pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:16.828536 kernel: kauditd_printk_skb: 26 callbacks suppressed May 10 02:17:16.829741 kernel: audit: type=1325 audit(1746843436.823:428): table=filter:122 family=2 entries=8 op=nft_register_rule pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:16.823000 audit[5179]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdc2adf2a0 a2=0 a3=7ffdc2adf28c items=0 ppid=2455 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:16.840664 kernel: audit: type=1300 audit(1746843436.823:428): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdc2adf2a0 a2=0 a3=7ffdc2adf28c items=0 ppid=2455 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:16.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:16.846656 kernel: audit: type=1327 audit(1746843436.823:428): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:16.846000 audit[5179]: NETFILTER_CFG table=nat:123 family=2 entries=30 op=nft_register_rule pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:16.852661 kernel: audit: type=1325 audit(1746843436.846:429): table=nat:123 family=2 entries=30 op=nft_register_rule pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:16.846000 audit[5179]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffdc2adf2a0 a2=0 a3=7ffdc2adf28c items=0 ppid=2455 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:16.846000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:16.866454 kernel: audit: type=1300 audit(1746843436.846:429): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffdc2adf2a0 a2=0 a3=7ffdc2adf28c items=0 ppid=2455 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:16.866597 kernel: audit: type=1327 audit(1746843436.846:429): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:16.911115 systemd[1]: run-containerd-runc-k8s.io-0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d-runc.ksPiGn.mount: Deactivated successfully. May 10 02:17:17.096456 env[1300]: time="2025-05-10T02:17:17.096396461Z" level=info msg="StopContainer for \"991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4\" with timeout 30 (s)" May 10 02:17:17.097288 env[1300]: time="2025-05-10T02:17:17.097243158Z" level=info msg="Stop container \"991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4\" with signal terminated" May 10 02:17:17.302374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4-rootfs.mount: Deactivated successfully. May 10 02:17:17.303319 env[1300]: time="2025-05-10T02:17:17.302947502Z" level=info msg="shim disconnected" id=991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4 May 10 02:17:17.303319 env[1300]: time="2025-05-10T02:17:17.303095601Z" level=warning msg="cleaning up after shim disconnected" id=991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4 namespace=k8s.io May 10 02:17:17.303319 env[1300]: time="2025-05-10T02:17:17.303118115Z" level=info msg="cleaning up dead shim" May 10 02:17:17.326959 env[1300]: time="2025-05-10T02:17:17.326902257Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5218 runtime=io.containerd.runc.v2\n" May 10 02:17:17.338099 env[1300]: time="2025-05-10T02:17:17.338053733Z" level=info msg="StopContainer for \"991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4\" returns successfully" May 10 02:17:17.342168 env[1300]: time="2025-05-10T02:17:17.342132217Z" level=info msg="StopPodSandbox for \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\"" May 10 02:17:17.342563 env[1300]: time="2025-05-10T02:17:17.342520755Z" level=info msg="Container to stop \"991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 02:17:17.347142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0-shm.mount: Deactivated successfully. 
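The kubelet pod_startup_latency_tracker entry above for csi-node-driver-msjzq reports podStartE2EDuration=53.86237441s and podStartSLOduration=41.516226257s. Those figures are consistent with the end-to-end time (observedRunningTime minus podCreationTimestamp) less the image-pull window (lastFinishedPulling minus firstStartedPulling). A short Go sketch, not part of the log, reproducing that arithmetic with the timestamps copied from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the csi-node-driver-msjzq startup entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-05-10 02:16:20 +0000 UTC")
        pullStart := parse("2025-05-10 02:16:54.804027629 +0000 UTC")
        pullEnd := parse("2025-05-10 02:17:07.150175779 +0000 UTC")
        running := parse("2025-05-10 02:17:13.86237441 +0000 UTC")

        e2e := running.Sub(created)         // 53.86237441s  (podStartE2EDuration)
        slo := e2e - pullEnd.Sub(pullStart) // ~41.516226257s (E2E minus the image-pull window)
        fmt.Println(e2e, slo)
    }
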
May 10 02:17:17.435536 env[1300]: time="2025-05-10T02:17:17.435448979Z" level=info msg="shim disconnected" id=61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0 May 10 02:17:17.435536 env[1300]: time="2025-05-10T02:17:17.435525805Z" level=warning msg="cleaning up after shim disconnected" id=61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0 namespace=k8s.io May 10 02:17:17.435536 env[1300]: time="2025-05-10T02:17:17.435543206Z" level=info msg="cleaning up dead shim" May 10 02:17:17.484221 env[1300]: time="2025-05-10T02:17:17.484138759Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5249 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T02:17:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" May 10 02:17:17.498898 env[1300]: time="2025-05-10T02:17:17.498818198Z" level=info msg="StopContainer for \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\" with timeout 5 (s)" May 10 02:17:17.501654 env[1300]: time="2025-05-10T02:17:17.499736613Z" level=info msg="Stop container \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\" with signal terminated" May 10 02:17:17.726317 env[1300]: time="2025-05-10T02:17:17.726114792Z" level=info msg="shim disconnected" id=0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d May 10 02:17:17.726688 env[1300]: time="2025-05-10T02:17:17.726596897Z" level=warning msg="cleaning up after shim disconnected" id=0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d namespace=k8s.io May 10 02:17:17.726859 env[1300]: time="2025-05-10T02:17:17.726828801Z" level=info msg="cleaning up dead shim" May 10 02:17:17.776914 env[1300]: time="2025-05-10T02:17:17.776841145Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5300 runtime=io.containerd.runc.v2\n" May 10 02:17:17.787983 systemd-networkd[1074]: cali37a65ea8dcd: Link DOWN May 10 02:17:17.787998 systemd-networkd[1074]: cali37a65ea8dcd: Lost carrier May 10 02:17:17.824697 env[1300]: time="2025-05-10T02:17:17.824588355Z" level=info msg="StopContainer for \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\" returns successfully" May 10 02:17:17.826515 env[1300]: time="2025-05-10T02:17:17.826464807Z" level=info msg="StopPodSandbox for \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\"" May 10 02:17:17.826795 env[1300]: time="2025-05-10T02:17:17.826748062Z" level=info msg="Container to stop \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 02:17:17.827312 env[1300]: time="2025-05-10T02:17:17.826972711Z" level=info msg="Container to stop \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 02:17:17.827312 env[1300]: time="2025-05-10T02:17:17.827009666Z" level=info msg="Container to stop \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 02:17:17.908995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0-rootfs.mount: Deactivated successfully. 
May 10 02:17:17.911718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d-rootfs.mount: Deactivated successfully. May 10 02:17:17.911948 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3-shm.mount: Deactivated successfully. May 10 02:17:17.922342 env[1300]: time="2025-05-10T02:17:17.922284988Z" level=info msg="shim disconnected" id=7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3 May 10 02:17:17.923218 env[1300]: time="2025-05-10T02:17:17.923161628Z" level=warning msg="cleaning up after shim disconnected" id=7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3 namespace=k8s.io May 10 02:17:17.923365 env[1300]: time="2025-05-10T02:17:17.923335578Z" level=info msg="cleaning up dead shim" May 10 02:17:17.928614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3-rootfs.mount: Deactivated successfully. May 10 02:17:17.967968 env[1300]: time="2025-05-10T02:17:17.967905502Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5345 runtime=io.containerd.runc.v2\n" May 10 02:17:17.974972 env[1300]: time="2025-05-10T02:17:17.974927862Z" level=info msg="TearDown network for sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" successfully" May 10 02:17:17.975235 env[1300]: time="2025-05-10T02:17:17.975177885Z" level=info msg="StopPodSandbox for \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" returns successfully" May 10 02:17:18.074349 kubelet[2271]: I0510 02:17:18.074281 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 02:17:18.076753 kubelet[2271]: I0510 02:17:18.076707 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-bin-dir\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.076960 kubelet[2271]: I0510 02:17:18.076933 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-xtables-lock\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.077167 kubelet[2271]: I0510 02:17:18.077129 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e48408-11ae-41e1-a9df-671daf11213c-tigera-ca-bundle\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.077350 kubelet[2271]: I0510 02:17:18.077318 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a4e48408-11ae-41e1-a9df-671daf11213c-node-certs\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.077694 kubelet[2271]: I0510 02:17:18.077548 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-lib-modules\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.077694 kubelet[2271]: I0510 02:17:18.077597 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcdfc\" (UniqueName: \"kubernetes.io/projected/a4e48408-11ae-41e1-a9df-671daf11213c-kube-api-access-dcdfc\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.077916 kubelet[2271]: I0510 02:17:18.077874 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-var-lib-calico\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.078090 kubelet[2271]: I0510 02:17:18.078062 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-log-dir\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.078397 kubelet[2271]: I0510 02:17:18.078226 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-flexvol-driver-host\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.078397 kubelet[2271]: I0510 02:17:18.078260 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-var-run-calico\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: 
\"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.078397 kubelet[2271]: I0510 02:17:18.078293 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-net-dir\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.080328 kubelet[2271]: I0510 02:17:18.080287 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-policysync\") pod \"a4e48408-11ae-41e1-a9df-671daf11213c\" (UID: \"a4e48408-11ae-41e1-a9df-671daf11213c\") " May 10 02:17:18.088594 kubelet[2271]: I0510 02:17:18.088550 2271 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-bin-dir\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.088854 kubelet[2271]: I0510 02:17:18.088822 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-policysync" (OuterVolumeSpecName: "policysync") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 02:17:18.089049 kubelet[2271]: I0510 02:17:18.089004 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 02:17:18.108306 kubelet[2271]: I0510 02:17:18.108251 2271 topology_manager.go:215] "Topology Admit Handler" podUID="577e8583-ba9d-49d5-b69b-cd46af3bc401" podNamespace="calico-system" podName="calico-node-fhrqx" May 10 02:17:18.127153 kubelet[2271]: E0510 02:17:18.127067 2271 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4e48408-11ae-41e1-a9df-671daf11213c" containerName="calico-node" May 10 02:17:18.127720 kubelet[2271]: E0510 02:17:18.127689 2271 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4e48408-11ae-41e1-a9df-671daf11213c" containerName="install-cni" May 10 02:17:18.127866 kubelet[2271]: E0510 02:17:18.127841 2271 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4e48408-11ae-41e1-a9df-671daf11213c" containerName="flexvol-driver" May 10 02:17:18.134892 systemd[1]: Started sshd@9-10.230.33.70:22-139.178.68.195:53132.service. May 10 02:17:18.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.33.70:22-139.178.68.195:53132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:18.149774 kernel: audit: type=1130 audit(1746843438.139:430): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.33.70:22-139.178.68.195:53132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:17:18.162131 kubelet[2271]: I0510 02:17:18.129158 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 02:17:18.162664 kubelet[2271]: I0510 02:17:18.129777 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 02:17:18.162837 kubelet[2271]: I0510 02:17:18.129806 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 02:17:18.163359 kubelet[2271]: I0510 02:17:18.129829 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 02:17:18.163672 kubelet[2271]: I0510 02:17:18.129868 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 02:17:18.167855 kubelet[2271]: I0510 02:17:18.129938 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 02:17:18.169280 kubelet[2271]: I0510 02:17:18.139147 2271 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e48408-11ae-41e1-a9df-671daf11213c" containerName="calico-node" May 10 02:17:18.177325 systemd[1]: var-lib-kubelet-pods-a4e48408\x2d11ae\x2d41e1\x2da9df\x2d671daf11213c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddcdfc.mount: Deactivated successfully. 
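The mount units being deactivated above use systemd's unit-name escaping: "/" is written as "-" and reserved bytes as \xNN. A small Python sketch of that unescaping, assuming the documented escaping rules (on a host, systemd-escape with --unescape --path should perform the same conversion):

import re

def unit_to_path(unit_name: str) -> str:
    # Drop the ".mount" suffix, map "-" back to "/", then decode \xNN escapes.
    name = unit_name.removesuffix(".mount")
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

print(unit_to_path(
    r"var-lib-kubelet-pods-a4e48408\x2d11ae\x2d41e1\x2da9df\x2d671daf11213c"
    r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddcdfc.mount"
))
# /var/lib/kubelet/pods/a4e48408-11ae-41e1-a9df-671daf11213c/volumes/kubernetes.io~projected/kube-api-access-dcdfc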
May 10 02:17:18.192366 kubelet[2271]: I0510 02:17:18.192326 2271 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-xtables-lock\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.192680 kubelet[2271]: I0510 02:17:18.192654 2271 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-lib-modules\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.193075 kubelet[2271]: I0510 02:17:18.193050 2271 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-var-lib-calico\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.193242 kubelet[2271]: I0510 02:17:18.193191 2271 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-log-dir\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.193386 kubelet[2271]: I0510 02:17:18.193361 2271 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-cni-net-dir\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.193529 kubelet[2271]: I0510 02:17:18.193503 2271 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-policysync\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.205758 kubelet[2271]: I0510 02:17:18.205720 2271 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-flexvol-driver-host\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.205996 kubelet[2271]: I0510 02:17:18.205969 2271 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a4e48408-11ae-41e1-a9df-671daf11213c-var-run-calico\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.206111 kubelet[2271]: I0510 02:17:18.192906 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e48408-11ae-41e1-a9df-671daf11213c-kube-api-access-dcdfc" (OuterVolumeSpecName: "kube-api-access-dcdfc") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "kube-api-access-dcdfc". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 02:17:18.235218 systemd[1]: var-lib-kubelet-pods-a4e48408\x2d11ae\x2d41e1\x2da9df\x2d671daf11213c-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 10 02:17:18.243352 kubelet[2271]: I0510 02:17:18.243319 2271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:17:18.258847 kubelet[2271]: I0510 02:17:18.258816 2271 scope.go:117] "RemoveContainer" containerID="0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d" May 10 02:17:18.299382 systemd[1]: var-lib-kubelet-pods-a4e48408\x2d11ae\x2d41e1\x2da9df\x2d671daf11213c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. 
May 10 02:17:18.312559 kubelet[2271]: I0510 02:17:18.312519 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e48408-11ae-41e1-a9df-671daf11213c-node-certs" (OuterVolumeSpecName: "node-certs") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 02:17:18.314511 kubelet[2271]: I0510 02:17:18.314462 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/577e8583-ba9d-49d5-b69b-cd46af3bc401-tigera-ca-bundle\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.314703 kubelet[2271]: I0510 02:17:18.314675 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/577e8583-ba9d-49d5-b69b-cd46af3bc401-xtables-lock\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.314874 kubelet[2271]: I0510 02:17:18.314846 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/577e8583-ba9d-49d5-b69b-cd46af3bc401-var-lib-calico\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.315043 kubelet[2271]: I0510 02:17:18.315015 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/577e8583-ba9d-49d5-b69b-cd46af3bc401-cni-net-dir\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.315240 kubelet[2271]: I0510 02:17:18.315199 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/577e8583-ba9d-49d5-b69b-cd46af3bc401-policysync\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.315854 kubelet[2271]: I0510 02:17:18.315484 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/577e8583-ba9d-49d5-b69b-cd46af3bc401-var-run-calico\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:17.776 [INFO][5284] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:17.784 [INFO][5284] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" iface="eth0" netns="/var/run/netns/cni-e3a12aca-27f6-0877-2312-46281d6f2266" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:17.784 [INFO][5284] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" iface="eth0" netns="/var/run/netns/cni-e3a12aca-27f6-0877-2312-46281d6f2266" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:17.818 [INFO][5284] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" after=34.119805ms iface="eth0" netns="/var/run/netns/cni-e3a12aca-27f6-0877-2312-46281d6f2266" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:17.818 [INFO][5284] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:17.818 [INFO][5284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:18.134 [INFO][5317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:18.140 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:18.159 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:18.283 [INFO][5317] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:18.283 [INFO][5317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:18.286 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:18.315966 env[1300]: 2025-05-10 02:17:18.301 [INFO][5284] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:17:18.319966 env[1300]: time="2025-05-10T02:17:18.316035209Z" level=info msg="TearDown network for sandbox \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\" successfully" May 10 02:17:18.319966 env[1300]: time="2025-05-10T02:17:18.316105977Z" level=info msg="StopPodSandbox for \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\" returns successfully" May 10 02:17:18.320118 kubelet[2271]: I0510 02:17:18.315586 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/577e8583-ba9d-49d5-b69b-cd46af3bc401-cni-bin-dir\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.320118 kubelet[2271]: I0510 02:17:18.316804 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs8l4\" (UniqueName: \"kubernetes.io/projected/577e8583-ba9d-49d5-b69b-cd46af3bc401-kube-api-access-fs8l4\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.320118 kubelet[2271]: I0510 02:17:18.316848 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/577e8583-ba9d-49d5-b69b-cd46af3bc401-cni-log-dir\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.320118 kubelet[2271]: I0510 02:17:18.317404 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/577e8583-ba9d-49d5-b69b-cd46af3bc401-flexvol-driver-host\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.320118 kubelet[2271]: I0510 02:17:18.317454 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/577e8583-ba9d-49d5-b69b-cd46af3bc401-lib-modules\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.320486 kubelet[2271]: I0510 02:17:18.317528 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/577e8583-ba9d-49d5-b69b-cd46af3bc401-node-certs\") pod \"calico-node-fhrqx\" (UID: \"577e8583-ba9d-49d5-b69b-cd46af3bc401\") " pod="calico-system/calico-node-fhrqx" May 10 02:17:18.320672 kubelet[2271]: I0510 02:17:18.320626 2271 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a4e48408-11ae-41e1-a9df-671daf11213c-node-certs\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.322409 kubelet[2271]: I0510 02:17:18.322373 2271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dcdfc\" (UniqueName: \"kubernetes.io/projected/a4e48408-11ae-41e1-a9df-671daf11213c-kube-api-access-dcdfc\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.324952 kubelet[2271]: I0510 02:17:18.324864 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a4e48408-11ae-41e1-a9df-671daf11213c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a4e48408-11ae-41e1-a9df-671daf11213c" (UID: "a4e48408-11ae-41e1-a9df-671daf11213c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 02:17:18.336649 env[1300]: time="2025-05-10T02:17:18.336573393Z" level=info msg="RemoveContainer for \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\"" May 10 02:17:18.345028 env[1300]: time="2025-05-10T02:17:18.344978029Z" level=info msg="RemoveContainer for \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\" returns successfully" May 10 02:17:18.353454 kubelet[2271]: I0510 02:17:18.353424 2271 scope.go:117] "RemoveContainer" containerID="fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d" May 10 02:17:18.356458 env[1300]: time="2025-05-10T02:17:18.356084549Z" level=info msg="RemoveContainer for \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\"" May 10 02:17:18.362181 env[1300]: time="2025-05-10T02:17:18.362143843Z" level=info msg="RemoveContainer for \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\" returns successfully" May 10 02:17:18.372564 kubelet[2271]: I0510 02:17:18.368320 2271 scope.go:117] "RemoveContainer" containerID="c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e" May 10 02:17:18.382135 env[1300]: time="2025-05-10T02:17:18.382031482Z" level=info msg="RemoveContainer for \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\"" May 10 02:17:18.387329 env[1300]: time="2025-05-10T02:17:18.387284721Z" level=info msg="RemoveContainer for \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\" returns successfully" May 10 02:17:18.387740 kubelet[2271]: I0510 02:17:18.387712 2271 scope.go:117] "RemoveContainer" containerID="0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d" May 10 02:17:18.388262 env[1300]: time="2025-05-10T02:17:18.388077514Z" level=error msg="ContainerStatus for \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\": not found" May 10 02:17:18.390705 kubelet[2271]: E0510 02:17:18.390620 2271 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\": not found" containerID="0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d" May 10 02:17:18.391005 kubelet[2271]: I0510 02:17:18.390937 2271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d"} err="failed to get container status \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b8ba55c9d85e62986cd055a78d5a7cac8fab8a1ea1a74129295ac2ed21da78d\": not found" May 10 02:17:18.391147 kubelet[2271]: I0510 02:17:18.391106 2271 scope.go:117] "RemoveContainer" containerID="fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d" May 10 02:17:18.391820 env[1300]: time="2025-05-10T02:17:18.391690134Z" level=error msg="ContainerStatus for \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\": not found" May 10 02:17:18.392191 kubelet[2271]: E0510 02:17:18.392161 2271 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\": not found" containerID="fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d" May 10 02:17:18.392517 kubelet[2271]: I0510 02:17:18.392462 2271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d"} err="failed to get container status \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe3b93f9162115199bb40220d64fcd7c27f2ccd9da9cd4e707d65b698b8b270d\": not found" May 10 02:17:18.392680 kubelet[2271]: I0510 02:17:18.392655 2271 scope.go:117] "RemoveContainer" containerID="c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e" May 10 02:17:18.395871 env[1300]: time="2025-05-10T02:17:18.395695755Z" level=error msg="ContainerStatus for \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\": not found" May 10 02:17:18.396277 kubelet[2271]: E0510 02:17:18.395977 2271 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\": not found" containerID="c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e" May 10 02:17:18.396375 kubelet[2271]: I0510 02:17:18.396288 2271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e"} err="failed to get container status \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9d10e1a9a99acb392aa0dec59183ba3780b88abb8c33d067e5ab8e288f28e4e\": not found" May 10 02:17:18.423334 kubelet[2271]: I0510 02:17:18.423299 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrzq8\" (UniqueName: \"kubernetes.io/projected/26d91028-bb97-4013-8daa-28d750dcefc1-kube-api-access-qrzq8\") pod \"26d91028-bb97-4013-8daa-28d750dcefc1\" (UID: \"26d91028-bb97-4013-8daa-28d750dcefc1\") " May 10 02:17:18.423604 kubelet[2271]: I0510 02:17:18.423541 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26d91028-bb97-4013-8daa-28d750dcefc1-tigera-ca-bundle\") pod \"26d91028-bb97-4013-8daa-28d750dcefc1\" (UID: \"26d91028-bb97-4013-8daa-28d750dcefc1\") " May 10 02:17:18.425366 kubelet[2271]: I0510 02:17:18.425339 2271 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e48408-11ae-41e1-a9df-671daf11213c-tigera-ca-bundle\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.447232 kubelet[2271]: I0510 02:17:18.447152 2271 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d91028-bb97-4013-8daa-28d750dcefc1-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "26d91028-bb97-4013-8daa-28d750dcefc1" (UID: "26d91028-bb97-4013-8daa-28d750dcefc1"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 02:17:18.451483 kubelet[2271]: I0510 02:17:18.451439 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d91028-bb97-4013-8daa-28d750dcefc1-kube-api-access-qrzq8" (OuterVolumeSpecName: "kube-api-access-qrzq8") pod "26d91028-bb97-4013-8daa-28d750dcefc1" (UID: "26d91028-bb97-4013-8daa-28d750dcefc1"). InnerVolumeSpecName "kube-api-access-qrzq8". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 02:17:18.528586 kubelet[2271]: I0510 02:17:18.528538 2271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qrzq8\" (UniqueName: \"kubernetes.io/projected/26d91028-bb97-4013-8daa-28d750dcefc1-kube-api-access-qrzq8\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.529230 kubelet[2271]: I0510 02:17:18.528847 2271 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26d91028-bb97-4013-8daa-28d750dcefc1-tigera-ca-bundle\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:18.617684 env[1300]: time="2025-05-10T02:17:18.613812304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fhrqx,Uid:577e8583-ba9d-49d5-b69b-cd46af3bc401,Namespace:calico-system,Attempt:0,}" May 10 02:17:18.643057 env[1300]: time="2025-05-10T02:17:18.642920403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:17:18.645799 env[1300]: time="2025-05-10T02:17:18.643078036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:17:18.645799 env[1300]: time="2025-05-10T02:17:18.643153119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:17:18.646240 env[1300]: time="2025-05-10T02:17:18.646177339Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e02dd55ef9083155215889253e7c3d6f5079262532b0bdd95a34297fac0d81b7 pid=5375 runtime=io.containerd.runc.v2 May 10 02:17:18.775053 env[1300]: time="2025-05-10T02:17:18.774978491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fhrqx,Uid:577e8583-ba9d-49d5-b69b-cd46af3bc401,Namespace:calico-system,Attempt:0,} returns sandbox id \"e02dd55ef9083155215889253e7c3d6f5079262532b0bdd95a34297fac0d81b7\"" May 10 02:17:18.798666 env[1300]: time="2025-05-10T02:17:18.795802183Z" level=info msg="CreateContainer within sandbox \"e02dd55ef9083155215889253e7c3d6f5079262532b0bdd95a34297fac0d81b7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 02:17:18.827818 env[1300]: time="2025-05-10T02:17:18.827769928Z" level=info msg="CreateContainer within sandbox \"e02dd55ef9083155215889253e7c3d6f5079262532b0bdd95a34297fac0d81b7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ea5aaaa62f7227de7a167c736cf8e9fa7a69ae006c95bdde6babadfadfa5e1cc\"" May 10 02:17:18.829151 env[1300]: time="2025-05-10T02:17:18.829104426Z" level=info msg="StartContainer for \"ea5aaaa62f7227de7a167c736cf8e9fa7a69ae006c95bdde6babadfadfa5e1cc\"" May 10 02:17:18.921268 systemd[1]: var-lib-kubelet-pods-26d91028\x2dbb97\x2d4013\x2d8daa\x2d28d750dcefc1-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. May 10 02:17:18.921551 systemd[1]: run-netns-cni\x2de3a12aca\x2d27f6\x2d0877\x2d2312\x2d46281d6f2266.mount: Deactivated successfully. May 10 02:17:18.921823 systemd[1]: var-lib-kubelet-pods-26d91028\x2dbb97\x2d4013\x2d8daa\x2d28d750dcefc1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqrzq8.mount: Deactivated successfully. May 10 02:17:19.006564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff-rootfs.mount: Deactivated successfully. 
May 10 02:17:19.012293 env[1300]: time="2025-05-10T02:17:19.012178019Z" level=info msg="shim disconnected" id=9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff May 10 02:17:19.012519 env[1300]: time="2025-05-10T02:17:19.012470482Z" level=warning msg="cleaning up after shim disconnected" id=9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff namespace=k8s.io May 10 02:17:19.012743 env[1300]: time="2025-05-10T02:17:19.012714547Z" level=info msg="cleaning up dead shim" May 10 02:17:19.046475 env[1300]: time="2025-05-10T02:17:19.046377070Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5444 runtime=io.containerd.runc.v2\n" May 10 02:17:19.089822 env[1300]: time="2025-05-10T02:17:19.089568963Z" level=info msg="StopContainer for \"9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff\" returns successfully" May 10 02:17:19.092400 env[1300]: time="2025-05-10T02:17:19.092361922Z" level=info msg="StopPodSandbox for \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\"" May 10 02:17:19.092818 env[1300]: time="2025-05-10T02:17:19.092776697Z" level=info msg="Container to stop \"9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 02:17:19.097147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2-shm.mount: Deactivated successfully. May 10 02:17:19.123167 env[1300]: time="2025-05-10T02:17:19.123086150Z" level=info msg="StartContainer for \"ea5aaaa62f7227de7a167c736cf8e9fa7a69ae006c95bdde6babadfadfa5e1cc\" returns successfully" May 10 02:17:19.189000 audit[5361]: USER_ACCT pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:19.202298 sshd[5361]: Accepted publickey for core from 139.178.68.195 port 53132 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:17:19.205837 kernel: audit: type=1101 audit(1746843439.189:431): pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:19.213009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2-rootfs.mount: Deactivated successfully. 
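The "Accepted publickey ... SHA256:WN4f51QI..." line above identifies the client key by its OpenSSH SHA-256 fingerprint: SHA-256 over the decoded key blob, base64-encoded with trailing padding stripped. A sketch for reproducing such a fingerprint from an authorized_keys entry; the key material is not present in this log, so the commented-out input is a placeholder:

import base64, hashlib

def ssh_sha256_fingerprint(authorized_keys_line: str) -> str:
    # Field 2 of an authorized_keys entry is the base64-encoded public key blob.
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints "SHA256:" followed by the digest in unpadded base64.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Compare against the sshd log line, e.g.:
# print(ssh_sha256_fingerprint("ssh-rsa AAAAB3NzaC1yc2E<rest of key> core"))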
May 10 02:17:19.214342 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:17:19.206000 audit[5361]: CRED_ACQ pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:19.235706 kernel: audit: type=1103 audit(1746843439.206:432): pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:19.236846 env[1300]: time="2025-05-10T02:17:19.236692451Z" level=info msg="shim disconnected" id=3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2 May 10 02:17:19.237278 env[1300]: time="2025-05-10T02:17:19.237219819Z" level=warning msg="cleaning up after shim disconnected" id=3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2 namespace=k8s.io May 10 02:17:19.237433 env[1300]: time="2025-05-10T02:17:19.237403196Z" level=info msg="cleaning up dead shim" May 10 02:17:19.263471 kernel: audit: type=1006 audit(1746843439.206:433): pid=5361 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 10 02:17:19.206000 audit[5361]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda85e6be0 a2=3 a3=0 items=0 ppid=1 pid=5361 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:19.206000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:17:19.263157 systemd[1]: Started session-10.scope. May 10 02:17:19.264351 systemd-logind[1288]: New session 10 of user core. 
May 10 02:17:19.304000 audit[5361]: USER_START pid=5361 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:19.307000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:19.342501 env[1300]: time="2025-05-10T02:17:19.342423885Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5489 runtime=io.containerd.runc.v2\n" May 10 02:17:19.354570 env[1300]: time="2025-05-10T02:17:19.354484423Z" level=info msg="TearDown network for sandbox \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\" successfully" May 10 02:17:19.354883 env[1300]: time="2025-05-10T02:17:19.354847791Z" level=info msg="StopPodSandbox for \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\" returns successfully" May 10 02:17:19.419353 env[1300]: time="2025-05-10T02:17:19.419278743Z" level=info msg="shim disconnected" id=ea5aaaa62f7227de7a167c736cf8e9fa7a69ae006c95bdde6babadfadfa5e1cc May 10 02:17:19.419826 env[1300]: time="2025-05-10T02:17:19.419770238Z" level=warning msg="cleaning up after shim disconnected" id=ea5aaaa62f7227de7a167c736cf8e9fa7a69ae006c95bdde6babadfadfa5e1cc namespace=k8s.io May 10 02:17:19.420039 env[1300]: time="2025-05-10T02:17:19.419998253Z" level=info msg="cleaning up dead shim" May 10 02:17:19.455688 env[1300]: time="2025-05-10T02:17:19.455422885Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5529 runtime=io.containerd.runc.v2\n" May 10 02:17:19.562038 kubelet[2271]: I0510 02:17:19.561954 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56dada48-8879-40a3-af1d-c6ab39542dfa-typha-certs\") pod \"56dada48-8879-40a3-af1d-c6ab39542dfa\" (UID: \"56dada48-8879-40a3-af1d-c6ab39542dfa\") " May 10 02:17:19.563092 kubelet[2271]: I0510 02:17:19.563060 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfhtq\" (UniqueName: \"kubernetes.io/projected/56dada48-8879-40a3-af1d-c6ab39542dfa-kube-api-access-nfhtq\") pod \"56dada48-8879-40a3-af1d-c6ab39542dfa\" (UID: \"56dada48-8879-40a3-af1d-c6ab39542dfa\") " May 10 02:17:19.563264 kubelet[2271]: I0510 02:17:19.563218 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56dada48-8879-40a3-af1d-c6ab39542dfa-tigera-ca-bundle\") pod \"56dada48-8879-40a3-af1d-c6ab39542dfa\" (UID: \"56dada48-8879-40a3-af1d-c6ab39542dfa\") " May 10 02:17:19.573931 kubelet[2271]: I0510 02:17:19.571077 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56dada48-8879-40a3-af1d-c6ab39542dfa-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "56dada48-8879-40a3-af1d-c6ab39542dfa" (UID: "56dada48-8879-40a3-af1d-c6ab39542dfa"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 02:17:19.586499 kubelet[2271]: I0510 02:17:19.586408 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dada48-8879-40a3-af1d-c6ab39542dfa-kube-api-access-nfhtq" (OuterVolumeSpecName: "kube-api-access-nfhtq") pod "56dada48-8879-40a3-af1d-c6ab39542dfa" (UID: "56dada48-8879-40a3-af1d-c6ab39542dfa"). InnerVolumeSpecName "kube-api-access-nfhtq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 02:17:19.592491 kubelet[2271]: I0510 02:17:19.592453 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dada48-8879-40a3-af1d-c6ab39542dfa-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "56dada48-8879-40a3-af1d-c6ab39542dfa" (UID: "56dada48-8879-40a3-af1d-c6ab39542dfa"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 02:17:19.663994 kubelet[2271]: I0510 02:17:19.663914 2271 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56dada48-8879-40a3-af1d-c6ab39542dfa-typha-certs\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:19.664435 kubelet[2271]: I0510 02:17:19.664407 2271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nfhtq\" (UniqueName: \"kubernetes.io/projected/56dada48-8879-40a3-af1d-c6ab39542dfa-kube-api-access-nfhtq\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:19.664649 kubelet[2271]: I0510 02:17:19.664599 2271 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56dada48-8879-40a3-af1d-c6ab39542dfa-tigera-ca-bundle\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:19.723000 audit[5550]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:19.723000 audit[5550]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffcefb28820 a2=0 a3=7ffcefb2880c items=0 ppid=2455 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:19.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:19.730000 audit[5550]: NETFILTER_CFG table=nat:125 family=2 entries=27 op=nft_unregister_chain pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:19.730000 audit[5550]: SYSCALL arch=c000003e syscall=46 success=yes exit=6028 a0=3 a1=7ffcefb28820 a2=0 a3=7ffcefb2880c items=0 ppid=2455 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:19.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:19.779000 audit[5552]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=5552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:19.779000 audit[5552]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffedf447fc0 a2=0 a3=7ffedf447fac items=0 ppid=2455 pid=5552 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:19.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:19.792000 audit[5552]: NETFILTER_CFG table=nat:127 family=2 entries=20 op=nft_register_rule pid=5552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:19.792000 audit[5552]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffedf447fc0 a2=0 a3=7ffedf447fac items=0 ppid=2455 pid=5552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:19.792000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:19.907812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea5aaaa62f7227de7a167c736cf8e9fa7a69ae006c95bdde6babadfadfa5e1cc-rootfs.mount: Deactivated successfully. May 10 02:17:19.908621 systemd[1]: var-lib-kubelet-pods-56dada48\x2d8879\x2d40a3\x2daf1d\x2dc6ab39542dfa-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. May 10 02:17:19.909015 systemd[1]: var-lib-kubelet-pods-56dada48\x2d8879\x2d40a3\x2daf1d\x2dc6ab39542dfa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnfhtq.mount: Deactivated successfully. May 10 02:17:19.909462 systemd[1]: var-lib-kubelet-pods-56dada48\x2d8879\x2d40a3\x2daf1d\x2dc6ab39542dfa-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 10 02:17:20.326069 kubelet[2271]: I0510 02:17:20.325989 2271 scope.go:117] "RemoveContainer" containerID="9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff" May 10 02:17:20.365689 env[1300]: time="2025-05-10T02:17:20.365237249Z" level=info msg="CreateContainer within sandbox \"e02dd55ef9083155215889253e7c3d6f5079262532b0bdd95a34297fac0d81b7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 10 02:17:20.368407 env[1300]: time="2025-05-10T02:17:20.367599539Z" level=info msg="RemoveContainer for \"9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff\"" May 10 02:17:20.375113 env[1300]: time="2025-05-10T02:17:20.375068057Z" level=info msg="RemoveContainer for \"9995f89b5a405fefff0e961c6d464c4d0af7232fb574d39abcbd915393645eff\" returns successfully" May 10 02:17:20.417040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount552492430.mount: Deactivated successfully. 
May 10 02:17:20.447115 env[1300]: time="2025-05-10T02:17:20.446845569Z" level=info msg="CreateContainer within sandbox \"e02dd55ef9083155215889253e7c3d6f5079262532b0bdd95a34297fac0d81b7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"22257b378e40912c9aa43c335ab77b5f88addbf52dc59b888722aaf704fb6fa1\"" May 10 02:17:20.449648 env[1300]: time="2025-05-10T02:17:20.448352687Z" level=info msg="StartContainer for \"22257b378e40912c9aa43c335ab77b5f88addbf52dc59b888722aaf704fb6fa1\"" May 10 02:17:20.532475 kubelet[2271]: I0510 02:17:20.530607 2271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d91028-bb97-4013-8daa-28d750dcefc1" path="/var/lib/kubelet/pods/26d91028-bb97-4013-8daa-28d750dcefc1/volumes" May 10 02:17:20.535378 kubelet[2271]: I0510 02:17:20.535126 2271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dada48-8879-40a3-af1d-c6ab39542dfa" path="/var/lib/kubelet/pods/56dada48-8879-40a3-af1d-c6ab39542dfa/volumes" May 10 02:17:20.537720 kubelet[2271]: I0510 02:17:20.537683 2271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e48408-11ae-41e1-a9df-671daf11213c" path="/var/lib/kubelet/pods/a4e48408-11ae-41e1-a9df-671daf11213c/volumes" May 10 02:17:20.568170 sshd[5361]: pam_unix(sshd:session): session closed for user core May 10 02:17:20.571000 audit[5361]: USER_END pid=5361 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:20.571000 audit[5361]: CRED_DISP pid=5361 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:20.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.33.70:22-139.178.68.195:53132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:20.574779 systemd[1]: sshd@9-10.230.33.70:22-139.178.68.195:53132.service: Deactivated successfully. May 10 02:17:20.576297 systemd[1]: session-10.scope: Deactivated successfully. May 10 02:17:20.580470 systemd-logind[1288]: Session 10 logged out. Waiting for processes to exit. May 10 02:17:20.586741 systemd-logind[1288]: Removed session 10. May 10 02:17:20.640381 env[1300]: time="2025-05-10T02:17:20.640310310Z" level=info msg="StartContainer for \"22257b378e40912c9aa43c335ab77b5f88addbf52dc59b888722aaf704fb6fa1\" returns successfully" May 10 02:17:20.908248 systemd[1]: run-containerd-runc-k8s.io-22257b378e40912c9aa43c335ab77b5f88addbf52dc59b888722aaf704fb6fa1-runc.tScBuM.mount: Deactivated successfully. May 10 02:17:23.158158 env[1300]: time="2025-05-10T02:17:23.158074075Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/10-calico.conflist\": WRITE)" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" May 10 02:17:23.212722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22257b378e40912c9aa43c335ab77b5f88addbf52dc59b888722aaf704fb6fa1-rootfs.mount: Deactivated successfully. 
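The "failed to reload cni configuration ... unexpected end of JSON input" message above indicates containerd re-read /etc/cni/net.d/10-calico.conflist while it was empty or only partially written, plausibly because the install-cni container started just before was rewriting it. A hedged sketch of a check one could run afterwards to confirm the file parses (the path comes from the log line; the function name and the "plugins" key inspection are illustrative):

import json, pathlib

CONFLIST = pathlib.Path("/etc/cni/net.d/10-calico.conflist")

def conflist_status(path: pathlib.Path) -> str:
    data = path.read_text()
    if not data.strip():
        return "empty file (matches 'unexpected end of JSON input')"
    try:
        conf = json.loads(data)
    except json.JSONDecodeError as exc:
        return f"invalid JSON: {exc}"
    return f"ok: {len(conf.get('plugins', []))} plugin entries"

# print(conflist_status(CONFLIST))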
May 10 02:17:23.223073 env[1300]: time="2025-05-10T02:17:23.223003166Z" level=info msg="shim disconnected" id=22257b378e40912c9aa43c335ab77b5f88addbf52dc59b888722aaf704fb6fa1 May 10 02:17:23.223534 env[1300]: time="2025-05-10T02:17:23.223357509Z" level=warning msg="cleaning up after shim disconnected" id=22257b378e40912c9aa43c335ab77b5f88addbf52dc59b888722aaf704fb6fa1 namespace=k8s.io May 10 02:17:23.224885 env[1300]: time="2025-05-10T02:17:23.223402531Z" level=info msg="cleaning up dead shim" May 10 02:17:23.248789 env[1300]: time="2025-05-10T02:17:23.248684426Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5613 runtime=io.containerd.runc.v2\n" May 10 02:17:23.571906 env[1300]: time="2025-05-10T02:17:23.571252944Z" level=info msg="CreateContainer within sandbox \"e02dd55ef9083155215889253e7c3d6f5079262532b0bdd95a34297fac0d81b7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 02:17:23.592102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826001567.mount: Deactivated successfully. May 10 02:17:23.621415 env[1300]: time="2025-05-10T02:17:23.621346658Z" level=info msg="CreateContainer within sandbox \"e02dd55ef9083155215889253e7c3d6f5079262532b0bdd95a34297fac0d81b7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"084543f0ca5a1043693087c72a8097c079136ce53312b00fcfe95383ad2dbfc4\"" May 10 02:17:23.625583 env[1300]: time="2025-05-10T02:17:23.625546797Z" level=info msg="StartContainer for \"084543f0ca5a1043693087c72a8097c079136ce53312b00fcfe95383ad2dbfc4\"" May 10 02:17:23.781530 env[1300]: time="2025-05-10T02:17:23.781457956Z" level=info msg="StartContainer for \"084543f0ca5a1043693087c72a8097c079136ce53312b00fcfe95383ad2dbfc4\" returns successfully" May 10 02:17:24.299989 kubelet[2271]: I0510 02:17:24.299874 2271 topology_manager.go:215] "Topology Admit Handler" podUID="f1a040b5-a640-437f-a564-ee0f86fc8621" podNamespace="calico-system" podName="calico-kube-controllers-74d7f4cf98-684md" May 10 02:17:24.310736 kubelet[2271]: E0510 02:17:24.310696 2271 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26d91028-bb97-4013-8daa-28d750dcefc1" containerName="calico-kube-controllers" May 10 02:17:24.311143 kubelet[2271]: E0510 02:17:24.310896 2271 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56dada48-8879-40a3-af1d-c6ab39542dfa" containerName="calico-typha" May 10 02:17:24.311143 kubelet[2271]: I0510 02:17:24.310980 2271 memory_manager.go:354] "RemoveStaleState removing state" podUID="56dada48-8879-40a3-af1d-c6ab39542dfa" containerName="calico-typha" May 10 02:17:24.311143 kubelet[2271]: I0510 02:17:24.311003 2271 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d91028-bb97-4013-8daa-28d750dcefc1" containerName="calico-kube-controllers" May 10 02:17:24.436452 kubelet[2271]: I0510 02:17:24.436170 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75djv\" (UniqueName: \"kubernetes.io/projected/f1a040b5-a640-437f-a564-ee0f86fc8621-kube-api-access-75djv\") pod \"calico-kube-controllers-74d7f4cf98-684md\" (UID: \"f1a040b5-a640-437f-a564-ee0f86fc8621\") " pod="calico-system/calico-kube-controllers-74d7f4cf98-684md" May 10 02:17:24.436452 kubelet[2271]: I0510 02:17:24.436288 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f1a040b5-a640-437f-a564-ee0f86fc8621-tigera-ca-bundle\") pod \"calico-kube-controllers-74d7f4cf98-684md\" (UID: \"f1a040b5-a640-437f-a564-ee0f86fc8621\") " pod="calico-system/calico-kube-controllers-74d7f4cf98-684md" May 10 02:17:24.517265 kubelet[2271]: I0510 02:17:24.511436 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fhrqx" podStartSLOduration=6.49750879 podStartE2EDuration="6.49750879s" podCreationTimestamp="2025-05-10 02:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:17:24.497236473 +0000 UTC m=+84.280418007" watchObservedRunningTime="2025-05-10 02:17:24.49750879 +0000 UTC m=+84.280690326" May 10 02:17:24.543033 systemd[1]: run-containerd-runc-k8s.io-084543f0ca5a1043693087c72a8097c079136ce53312b00fcfe95383ad2dbfc4-runc.HAGsDh.mount: Deactivated successfully. May 10 02:17:24.633235 env[1300]: time="2025-05-10T02:17:24.632987756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d7f4cf98-684md,Uid:f1a040b5-a640-437f-a564-ee0f86fc8621,Namespace:calico-system,Attempt:0,}" May 10 02:17:25.006718 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 02:17:25.009118 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali84d587568ba: link becomes ready May 10 02:17:25.003941 systemd-networkd[1074]: cali84d587568ba: Link UP May 10 02:17:25.009839 systemd-networkd[1074]: cali84d587568ba: Gained carrier May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.812 [INFO][5708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0 calico-kube-controllers-74d7f4cf98- calico-system f1a040b5-a640-437f-a564-ee0f86fc8621 1136 0 2025-05-10 02:17:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74d7f4cf98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-it8yl.gb1.brightbox.com calico-kube-controllers-74d7f4cf98-684md eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali84d587568ba [] []}} ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Namespace="calico-system" Pod="calico-kube-controllers-74d7f4cf98-684md" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.816 [INFO][5708] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Namespace="calico-system" Pod="calico-kube-controllers-74d7f4cf98-684md" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.906 [INFO][5724] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" HandleID="k8s-pod-network.fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.922 [INFO][5724] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" HandleID="k8s-pod-network.fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ab30), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-it8yl.gb1.brightbox.com", "pod":"calico-kube-controllers-74d7f4cf98-684md", "timestamp":"2025-05-10 02:17:24.906528605 +0000 UTC"}, Hostname:"srv-it8yl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.922 [INFO][5724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.922 [INFO][5724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.923 [INFO][5724] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-it8yl.gb1.brightbox.com' May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.926 [INFO][5724] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.939 [INFO][5724] ipam/ipam.go 372: Looking up existing affinities for host host="srv-it8yl.gb1.brightbox.com" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.956 [INFO][5724] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.962 [INFO][5724] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.968 [INFO][5724] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.968 [INFO][5724] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.971 [INFO][5724] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.978 [INFO][5724] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.988 [INFO][5724] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.136/26] block=192.168.123.128/26 handle="k8s-pod-network.fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.988 [INFO][5724] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.136/26] handle="k8s-pod-network.fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.988 [INFO][5724] ipam/ipam_plugin.go 
374: Released host-wide IPAM lock. May 10 02:17:25.042611 env[1300]: 2025-05-10 02:17:24.988 [INFO][5724] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.136/26] IPv6=[] ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" HandleID="k8s-pod-network.fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" May 10 02:17:25.045539 env[1300]: 2025-05-10 02:17:24.992 [INFO][5708] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Namespace="calico-system" Pod="calico-kube-controllers-74d7f4cf98-684md" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0", GenerateName:"calico-kube-controllers-74d7f4cf98-", Namespace:"calico-system", SelfLink:"", UID:"f1a040b5-a640-437f-a564-ee0f86fc8621", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d7f4cf98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-74d7f4cf98-684md", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84d587568ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:25.045539 env[1300]: 2025-05-10 02:17:24.992 [INFO][5708] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.136/32] ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Namespace="calico-system" Pod="calico-kube-controllers-74d7f4cf98-684md" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" May 10 02:17:25.045539 env[1300]: 2025-05-10 02:17:24.992 [INFO][5708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84d587568ba ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Namespace="calico-system" Pod="calico-kube-controllers-74d7f4cf98-684md" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" May 10 02:17:25.045539 env[1300]: 2025-05-10 02:17:25.015 [INFO][5708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Namespace="calico-system" Pod="calico-kube-controllers-74d7f4cf98-684md" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" May 10 
02:17:25.045539 env[1300]: 2025-05-10 02:17:25.016 [INFO][5708] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Namespace="calico-system" Pod="calico-kube-controllers-74d7f4cf98-684md" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0", GenerateName:"calico-kube-controllers-74d7f4cf98-", Namespace:"calico-system", SelfLink:"", UID:"f1a040b5-a640-437f-a564-ee0f86fc8621", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d7f4cf98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab", Pod:"calico-kube-controllers-74d7f4cf98-684md", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84d587568ba", MAC:"1e:2d:7a:a4:96:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:25.045539 env[1300]: 2025-05-10 02:17:25.037 [INFO][5708] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab" Namespace="calico-system" Pod="calico-kube-controllers-74d7f4cf98-684md" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--74d7f4cf98--684md-eth0" May 10 02:17:25.087593 env[1300]: time="2025-05-10T02:17:25.087472005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:17:25.087949 env[1300]: time="2025-05-10T02:17:25.087551600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:17:25.087949 env[1300]: time="2025-05-10T02:17:25.087593919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:17:25.088409 env[1300]: time="2025-05-10T02:17:25.088290277Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab pid=5748 runtime=io.containerd.runc.v2 May 10 02:17:25.227317 env[1300]: time="2025-05-10T02:17:25.227148891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d7f4cf98-684md,Uid:f1a040b5-a640-437f-a564-ee0f86fc8621,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab\"" May 10 02:17:25.320296 env[1300]: time="2025-05-10T02:17:25.320054287Z" level=info msg="CreateContainer within sandbox \"fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 02:17:25.346611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4079607466.mount: Deactivated successfully. May 10 02:17:25.354391 env[1300]: time="2025-05-10T02:17:25.354277686Z" level=info msg="CreateContainer within sandbox \"fd06f4a9be76b24def652d596fc378062ffae1163cd6a8108f21583a37c270ab\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"78ab25bd2f3c3e9c5bb5be43342c2a35492d3ecca6c50a35da52b2948ee55bc9\"" May 10 02:17:25.356607 env[1300]: time="2025-05-10T02:17:25.356542618Z" level=info msg="StartContainer for \"78ab25bd2f3c3e9c5bb5be43342c2a35492d3ecca6c50a35da52b2948ee55bc9\"" May 10 02:17:25.495798 env[1300]: time="2025-05-10T02:17:25.495732430Z" level=info msg="StartContainer for \"78ab25bd2f3c3e9c5bb5be43342c2a35492d3ecca6c50a35da52b2948ee55bc9\" returns successfully" May 10 02:17:25.640000 audit[5876]: AVC avc: denied { write } for pid=5876 comm="tee" name="fd" dev="proc" ino=37838 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:17:25.654712 kernel: kauditd_printk_skb: 19 callbacks suppressed May 10 02:17:25.654918 kernel: audit: type=1400 audit(1746843445.640:443): avc: denied { write } for pid=5876 comm="tee" name="fd" dev="proc" ino=37838 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:17:25.640000 audit[5876]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed5551a06 a2=241 a3=1b6 items=1 ppid=5842 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:25.680765 kernel: audit: type=1300 audit(1746843445.640:443): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed5551a06 a2=241 a3=1b6 items=1 ppid=5842 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:25.680873 kernel: audit: type=1307 audit(1746843445.640:443): cwd="/etc/service/enabled/confd/log" May 10 02:17:25.680935 kernel: audit: type=1302 audit(1746843445.640:443): item=0 name="/dev/fd/63" inode=37829 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:17:25.680995 kernel: audit: type=1327 audit(1746843445.640:443): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:17:25.640000 audit: CWD cwd="/etc/service/enabled/confd/log" May 10 02:17:25.640000 audit: PATH item=0 name="/dev/fd/63" inode=37829 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:17:25.640000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:17:25.692000 audit[5888]: AVC avc: denied { write } for pid=5888 comm="tee" name="fd" dev="proc" ino=37850 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:17:25.699660 kernel: audit: type=1400 audit(1746843445.692:444): avc: denied { write } for pid=5888 comm="tee" name="fd" dev="proc" ino=37850 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:17:25.692000 audit[5888]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff39ea5a07 a2=241 a3=1b6 items=1 ppid=5852 pid=5888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:25.707971 kernel: audit: type=1300 audit(1746843445.692:444): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff39ea5a07 a2=241 a3=1b6 items=1 ppid=5852 pid=5888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:25.692000 audit: CWD cwd="/etc/service/enabled/bird/log" May 10 02:17:25.711736 kernel: audit: type=1307 audit(1746843445.692:444): cwd="/etc/service/enabled/bird/log" May 10 02:17:25.717406 kernel: audit: type=1302 audit(1746843445.692:444): item=0 name="/dev/fd/63" inode=37847 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:17:25.692000 audit: PATH item=0 name="/dev/fd/63" inode=37847 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:17:25.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:17:25.727519 kernel: audit: type=1327 audit(1746843445.692:444): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:17:25.731138 systemd[1]: Started sshd@10-10.230.33.70:22-139.178.68.195:55778.service. May 10 02:17:25.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.33.70:22-139.178.68.195:55778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:17:25.775000 audit[5879]: AVC avc: denied { write } for pid=5879 comm="tee" name="fd" dev="proc" ino=37873 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:17:25.775000 audit[5879]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdc6fed9f7 a2=241 a3=1b6 items=1 ppid=5840 pid=5879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:25.775000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 10 02:17:25.775000 audit: PATH item=0 name="/dev/fd/63" inode=36834 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:17:25.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:17:25.807000 audit[5896]: AVC avc: denied { write } for pid=5896 comm="tee" name="fd" dev="proc" ino=37888 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:17:25.809000 audit[5874]: AVC avc: denied { write } for pid=5874 comm="tee" name="fd" dev="proc" ino=37889 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:17:25.815000 audit[5893]: AVC avc: denied { write } for pid=5893 comm="tee" name="fd" dev="proc" ino=37892 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:17:25.809000 audit[5874]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffbbc1ea06 a2=241 a3=1b6 items=1 ppid=5841 pid=5874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:25.809000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 10 02:17:25.809000 audit: PATH item=0 name="/dev/fd/63" inode=36833 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:17:25.809000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:17:25.820000 audit[5898]: AVC avc: denied { write } for pid=5898 comm="tee" name="fd" dev="proc" ino=37896 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 10 02:17:25.815000 audit[5893]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcc9c34a08 a2=241 a3=1b6 items=1 ppid=5856 pid=5893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:25.815000 audit: CWD cwd="/etc/service/enabled/cni/log" May 10 02:17:25.815000 audit: PATH item=0 name="/dev/fd/63" inode=37862 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:17:25.815000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:17:25.807000 audit[5896]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff8c7739f6 a2=241 a3=1b6 items=1 ppid=5844 pid=5896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:25.820000 audit[5898]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa06e8a06 a2=241 a3=1b6 items=1 ppid=5854 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:25.807000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 10 02:17:25.820000 audit: CWD cwd="/etc/service/enabled/felix/log" May 10 02:17:25.807000 audit: PATH item=0 name="/dev/fd/63" inode=37869 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:17:25.820000 audit: PATH item=0 name="/dev/fd/63" inode=37870 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 02:17:25.807000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:17:25.820000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 10 02:17:26.696000 audit[5891]: USER_ACCT pid=5891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:26.699000 audit[5891]: CRED_ACQ pid=5891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:26.699000 audit[5891]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd38d79300 a2=3 a3=0 items=0 ppid=1 pid=5891 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:26.699000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:17:26.706706 sshd[5891]: Accepted publickey for core from 139.178.68.195 port 55778 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:17:26.704948 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:17:26.738294 systemd-logind[1288]: New session 11 of user core. May 10 02:17:26.739936 systemd[1]: Started session-11.scope. 
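The PROCTITLE fields in the audit records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments. Decoding one of the tee records is a one-liner; a small sketch (Python, hex copied verbatim from the record above):

    def decode_proctitle(hex_argv: str) -> str:
        # auditd hex-encodes argv; arguments are separated by NUL bytes.
        return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode()

    print(decode_proctitle(
        "2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F67"
        "2D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633"
    ))
    # -> /usr/bin/coreutils --coreutils-prog-shebang=tee /usr/bin/tee /dev/fd/63

Judging by the CWD fields, these denials are the per-service log pipes (confd, bird, bird6, felix, cni, node-status-reporter, allocate-tunnel-addrs) started inside the calico-node container, each writing through tee.
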
May 10 02:17:26.755000 audit[5891]: USER_START pid=5891 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:26.758000 audit[5935]: CRED_ACQ pid=5935 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:26.896862 systemd-networkd[1074]: cali84d587568ba: Gained IPv6LL May 10 02:17:27.562927 systemd[1]: run-containerd-runc-k8s.io-78ab25bd2f3c3e9c5bb5be43342c2a35492d3ecca6c50a35da52b2948ee55bc9-runc.F4NC29.mount: Deactivated successfully. May 10 02:17:28.107755 sshd[5891]: pam_unix(sshd:session): session closed for user core May 10 02:17:28.109000 audit[5891]: USER_END pid=5891 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:28.110000 audit[5891]: CRED_DISP pid=5891 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:28.113655 systemd[1]: sshd@10-10.230.33.70:22-139.178.68.195:55778.service: Deactivated successfully. May 10 02:17:28.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.33.70:22-139.178.68.195:55778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:28.116106 systemd[1]: session-11.scope: Deactivated successfully. May 10 02:17:28.116953 systemd-logind[1288]: Session 11 logged out. Waiting for processes to exit. May 10 02:17:28.119508 systemd-logind[1288]: Removed session 11. May 10 02:17:28.540422 systemd[1]: run-containerd-runc-k8s.io-78ab25bd2f3c3e9c5bb5be43342c2a35492d3ecca6c50a35da52b2948ee55bc9-runc.Pej7c7.mount: Deactivated successfully. 
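The kubelet's pod-startup latency tracker earlier in this log reports podStartSLOduration=6.49750879s for calico-node-fhrqx; that value lines up exactly with watchObservedRunningTime (02:17:24.49750879) minus podCreationTimestamp (02:17:18), and with the pull timestamps left at their zero value the E2E duration is the same number. A quick check of the arithmetic (Python, timestamps copied from the log, truncated to microsecond precision):

    from datetime import datetime, timezone

    created = datetime(2025, 5, 10, 2, 17, 18, tzinfo=timezone.utc)          # podCreationTimestamp
    running = datetime(2025, 5, 10, 2, 17, 24, 497508, tzinfo=timezone.utc)  # watchObservedRunningTime
    print((running - created).total_seconds())  # 6.497508, matching the reported 6.49750879s
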
May 10 02:17:31.513959 kubelet[2271]: I0510 02:17:31.508721 2271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 02:17:31.651832 kubelet[2271]: I0510 02:17:31.651711 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74d7f4cf98-684md" podStartSLOduration=12.650869193 podStartE2EDuration="12.650869193s" podCreationTimestamp="2025-05-10 02:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:17:26.566447209 +0000 UTC m=+86.349628743" watchObservedRunningTime="2025-05-10 02:17:31.650869193 +0000 UTC m=+91.434050739" May 10 02:17:31.697203 env[1300]: time="2025-05-10T02:17:31.695950827Z" level=info msg="StopContainer for \"bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5\" with timeout 30 (s)" May 10 02:17:31.697203 env[1300]: time="2025-05-10T02:17:31.696648653Z" level=info msg="Stop container \"bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5\" with signal terminated" May 10 02:17:31.783368 kubelet[2271]: I0510 02:17:31.783216 2271 topology_manager.go:215] "Topology Admit Handler" podUID="b7a89b96-43d1-4ac9-9c57-3bc92b297490" podNamespace="calico-apiserver" podName="calico-apiserver-67d5855849-drvpp" May 10 02:17:31.815879 kernel: kauditd_printk_skb: 36 callbacks suppressed May 10 02:17:31.817829 kernel: audit: type=1325 audit(1746843451.804:459): table=filter:128 family=2 entries=10 op=nft_register_rule pid=6091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:31.804000 audit[6091]: NETFILTER_CFG table=filter:128 family=2 entries=10 op=nft_register_rule pid=6091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:31.839457 kernel: audit: type=1300 audit(1746843451.804:459): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff06634540 a2=0 a3=7fff0663452c items=0 ppid=2455 pid=6091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:31.804000 audit[6091]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff06634540 a2=0 a3=7fff0663452c items=0 ppid=2455 pid=6091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:31.844815 env[1300]: time="2025-05-10T02:17:31.844676737Z" level=info msg="shim disconnected" id=bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5 May 10 02:17:31.845466 env[1300]: time="2025-05-10T02:17:31.845057817Z" level=warning msg="cleaning up after shim disconnected" id=bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5 namespace=k8s.io May 10 02:17:31.845466 env[1300]: time="2025-05-10T02:17:31.845083013Z" level=info msg="cleaning up dead shim" May 10 02:17:31.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:31.850673 kernel: audit: type=1327 audit(1746843451.804:459): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:31.847808 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5-rootfs.mount: Deactivated successfully. May 10 02:17:31.838000 audit[6091]: NETFILTER_CFG table=nat:129 family=2 entries=38 op=nft_register_chain pid=6091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:31.860765 kernel: audit: type=1325 audit(1746843451.838:460): table=nat:129 family=2 entries=38 op=nft_register_chain pid=6091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:31.838000 audit[6091]: SYSCALL arch=c000003e syscall=46 success=yes exit=12356 a0=3 a1=7fff06634540 a2=0 a3=7fff0663452c items=0 ppid=2455 pid=6091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:31.869676 kernel: audit: type=1300 audit(1746843451.838:460): arch=c000003e syscall=46 success=yes exit=12356 a0=3 a1=7fff06634540 a2=0 a3=7fff0663452c items=0 ppid=2455 pid=6091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:31.838000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:31.874659 kernel: audit: type=1327 audit(1746843451.838:460): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:31.914954 kernel: audit: type=1325 audit(1746843451.886:461): table=filter:130 family=2 entries=10 op=nft_register_rule pid=6112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:31.915290 kernel: audit: type=1300 audit(1746843451.886:461): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd2d10e570 a2=0 a3=7ffd2d10e55c items=0 ppid=2455 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:31.886000 audit[6112]: NETFILTER_CFG table=filter:130 family=2 entries=10 op=nft_register_rule pid=6112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:31.886000 audit[6112]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd2d10e570 a2=0 a3=7ffd2d10e55c items=0 ppid=2455 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:31.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:31.924811 kernel: audit: type=1327 audit(1746843451.886:461): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:31.924930 kernel: audit: type=1325 audit(1746843451.918:462): table=nat:131 family=2 entries=38 op=nft_unregister_chain pid=6112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:31.918000 audit[6112]: NETFILTER_CFG table=nat:131 family=2 entries=38 op=nft_unregister_chain pid=6112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:31.926296 env[1300]: time="2025-05-10T02:17:31.926225672Z" 
level=warning msg="cleanup warnings time=\"2025-05-10T02:17:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6099 runtime=io.containerd.runc.v2\n" May 10 02:17:31.918000 audit[6112]: SYSCALL arch=c000003e syscall=46 success=yes exit=10596 a0=3 a1=7ffd2d10e570 a2=0 a3=7ffd2d10e55c items=0 ppid=2455 pid=6112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:31.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:31.942054 env[1300]: time="2025-05-10T02:17:31.941999085Z" level=info msg="StopContainer for \"bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5\" returns successfully" May 10 02:17:31.943793 env[1300]: time="2025-05-10T02:17:31.943757231Z" level=info msg="StopPodSandbox for \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\"" May 10 02:17:31.944020 env[1300]: time="2025-05-10T02:17:31.943972532Z" level=info msg="Container to stop \"bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 02:17:31.948736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5-shm.mount: Deactivated successfully. May 10 02:17:31.956090 kubelet[2271]: I0510 02:17:31.955672 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7a89b96-43d1-4ac9-9c57-3bc92b297490-calico-apiserver-certs\") pod \"calico-apiserver-67d5855849-drvpp\" (UID: \"b7a89b96-43d1-4ac9-9c57-3bc92b297490\") " pod="calico-apiserver/calico-apiserver-67d5855849-drvpp" May 10 02:17:31.956090 kubelet[2271]: I0510 02:17:31.955900 2271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp57f\" (UniqueName: \"kubernetes.io/projected/b7a89b96-43d1-4ac9-9c57-3bc92b297490-kube-api-access-jp57f\") pod \"calico-apiserver-67d5855849-drvpp\" (UID: \"b7a89b96-43d1-4ac9-9c57-3bc92b297490\") " pod="calico-apiserver/calico-apiserver-67d5855849-drvpp" May 10 02:17:32.027904 env[1300]: time="2025-05-10T02:17:32.026500056Z" level=info msg="shim disconnected" id=28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5 May 10 02:17:32.028051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5-rootfs.mount: Deactivated successfully. 
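The NETFILTER_CFG audit records above carry the same hex PROCTITLE encoding as the earlier tee records; decoded the same way, the audited command is an iptables-restore invocation (Python one-liner, hex copied from the records above):

    hex_argv = (
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    print(bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode())
    # -> iptables-restore -w 5 -W 100000 --noflush --counters
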
May 10 02:17:32.031159 env[1300]: time="2025-05-10T02:17:32.031069281Z" level=warning msg="cleaning up after shim disconnected" id=28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5 namespace=k8s.io May 10 02:17:32.031274 env[1300]: time="2025-05-10T02:17:32.031159872Z" level=info msg="cleaning up dead shim" May 10 02:17:32.046775 env[1300]: time="2025-05-10T02:17:32.046713961Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6141 runtime=io.containerd.runc.v2\n" May 10 02:17:32.103341 env[1300]: time="2025-05-10T02:17:32.103267212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d5855849-drvpp,Uid:b7a89b96-43d1-4ac9-9c57-3bc92b297490,Namespace:calico-apiserver,Attempt:0,}" May 10 02:17:32.302128 systemd-networkd[1074]: cali6229da7e49a: Link DOWN May 10 02:17:32.302142 systemd-networkd[1074]: cali6229da7e49a: Lost carrier May 10 02:17:32.531710 kubelet[2271]: I0510 02:17:32.531650 2271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.288 [INFO][6170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.289 [INFO][6170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" iface="eth0" netns="/var/run/netns/cni-88e9df4a-ec7f-7888-8ba5-6593395ebc36" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.292 [INFO][6170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" iface="eth0" netns="/var/run/netns/cni-88e9df4a-ec7f-7888-8ba5-6593395ebc36" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.333 [INFO][6170] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" after=44.621807ms iface="eth0" netns="/var/run/netns/cni-88e9df4a-ec7f-7888-8ba5-6593395ebc36" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.334 [INFO][6170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.334 [INFO][6170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.482 [INFO][6204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.484 [INFO][6204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.485 [INFO][6204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.563 [INFO][6204] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.563 [INFO][6204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.566 [INFO][6204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:32.581254 env[1300]: 2025-05-10 02:17:32.574 [INFO][6170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:17:32.592528 env[1300]: time="2025-05-10T02:17:32.581511499Z" level=info msg="TearDown network for sandbox \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\" successfully" May 10 02:17:32.592528 env[1300]: time="2025-05-10T02:17:32.581578344Z" level=info msg="StopPodSandbox for \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\" returns successfully" May 10 02:17:32.592528 env[1300]: time="2025-05-10T02:17:32.583072045Z" level=info msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\"" May 10 02:17:32.628397 systemd-networkd[1074]: cali6d3639beb1f: Link UP May 10 02:17:32.636361 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6d3639beb1f: link becomes ready May 10 02:17:32.636690 systemd-networkd[1074]: cali6d3639beb1f: Gained carrier May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.267 [INFO][6177] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0 calico-apiserver-67d5855849- calico-apiserver b7a89b96-43d1-4ac9-9c57-3bc92b297490 1207 0 2025-05-10 02:17:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67d5855849 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-it8yl.gb1.brightbox.com calico-apiserver-67d5855849-drvpp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6d3639beb1f [] []}} ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-drvpp" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.268 [INFO][6177] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-drvpp" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.491 [INFO][6200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" HandleID="k8s-pod-network.1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.512 [INFO][6200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" HandleID="k8s-pod-network.1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050980), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-it8yl.gb1.brightbox.com", "pod":"calico-apiserver-67d5855849-drvpp", "timestamp":"2025-05-10 02:17:32.491603424 +0000 UTC"}, Hostname:"srv-it8yl.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.512 [INFO][6200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.568 [INFO][6200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.568 [INFO][6200] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-it8yl.gb1.brightbox.com' May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.573 [INFO][6200] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.583 [INFO][6200] ipam/ipam.go 372: Looking up existing affinities for host host="srv-it8yl.gb1.brightbox.com" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.591 [INFO][6200] ipam/ipam.go 489: Trying affinity for 192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.594 [INFO][6200] ipam/ipam.go 155: Attempting to load block cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.598 [INFO][6200] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.123.128/26 host="srv-it8yl.gb1.brightbox.com" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.598 [INFO][6200] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.123.128/26 handle="k8s-pod-network.1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.602 [INFO][6200] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351 May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.609 [INFO][6200] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.123.128/26 handle="k8s-pod-network.1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.617 [INFO][6200] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.123.137/26] block=192.168.123.128/26 handle="k8s-pod-network.1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" host="srv-it8yl.gb1.brightbox.com" 
May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.617 [INFO][6200] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.123.137/26] handle="k8s-pod-network.1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" host="srv-it8yl.gb1.brightbox.com" May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.618 [INFO][6200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:32.662154 env[1300]: 2025-05-10 02:17:32.618 [INFO][6200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.137/26] IPv6=[] ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" HandleID="k8s-pod-network.1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" May 10 02:17:32.664896 env[1300]: 2025-05-10 02:17:32.622 [INFO][6177] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-drvpp" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0", GenerateName:"calico-apiserver-67d5855849-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a89b96-43d1-4ac9-9c57-3bc92b297490", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d5855849", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-67d5855849-drvpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d3639beb1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:32.664896 env[1300]: 2025-05-10 02:17:32.622 [INFO][6177] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.123.137/32] ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-drvpp" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" May 10 02:17:32.664896 env[1300]: 2025-05-10 02:17:32.622 [INFO][6177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d3639beb1f ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-drvpp" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" May 10 02:17:32.664896 env[1300]: 2025-05-10 02:17:32.639 [INFO][6177] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-drvpp" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" May 10 02:17:32.664896 env[1300]: 2025-05-10 02:17:32.639 [INFO][6177] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-drvpp" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0", GenerateName:"calico-apiserver-67d5855849-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a89b96-43d1-4ac9-9c57-3bc92b297490", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d5855849", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351", Pod:"calico-apiserver-67d5855849-drvpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d3639beb1f", MAC:"0e:4f:61:46:a2:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:32.664896 env[1300]: 2025-05-10 02:17:32.654 [INFO][6177] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351" Namespace="calico-apiserver" Pod="calico-apiserver-67d5855849-drvpp" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--67d5855849--drvpp-eth0" May 10 02:17:32.714053 env[1300]: time="2025-05-10T02:17:32.713600139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 02:17:32.714053 env[1300]: time="2025-05-10T02:17:32.713722613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 02:17:32.714053 env[1300]: time="2025-05-10T02:17:32.713785071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 02:17:32.716132 env[1300]: time="2025-05-10T02:17:32.714087445Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351 pid=6255 runtime=io.containerd.runc.v2 May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.722 [WARNING][6230] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0", GenerateName:"calico-apiserver-59c6df465c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3bda719f-2d0c-40dd-8013-db678548720f", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6df465c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5", Pod:"calico-apiserver-59c6df465c-t7qd5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6229da7e49a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.722 [INFO][6230] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.722 [INFO][6230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" iface="eth0" netns="" May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.722 [INFO][6230] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.722 [INFO][6230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.802 [INFO][6269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.806 [INFO][6269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.806 [INFO][6269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.822 [WARNING][6269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.823 [INFO][6269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.825 [INFO][6269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:32.832385 env[1300]: 2025-05-10 02:17:32.828 [INFO][6230] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:17:32.832385 env[1300]: time="2025-05-10T02:17:32.832336583Z" level=info msg="TearDown network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" successfully" May 10 02:17:32.835605 env[1300]: time="2025-05-10T02:17:32.832390859Z" level=info msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" returns successfully" May 10 02:17:32.835605 env[1300]: time="2025-05-10T02:17:32.833402177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d5855849-drvpp,Uid:b7a89b96-43d1-4ac9-9c57-3bc92b297490,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351\"" May 10 02:17:32.842606 systemd[1]: run-netns-cni\x2d88e9df4a\x2dec7f\x2d7888\x2d8ba5\x2d6593395ebc36.mount: Deactivated successfully. May 10 02:17:32.861857 env[1300]: time="2025-05-10T02:17:32.861787679Z" level=info msg="CreateContainer within sandbox \"1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 02:17:32.892028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164289446.mount: Deactivated successfully. 
May 10 02:17:32.891000 audit[6298]: NETFILTER_CFG table=filter:132 family=2 entries=10 op=nft_register_rule pid=6298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:32.891000 audit[6298]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe25c1d290 a2=0 a3=7ffe25c1d27c items=0 ppid=2455 pid=6298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:32.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:32.898446 env[1300]: time="2025-05-10T02:17:32.893810693Z" level=info msg="CreateContainer within sandbox \"1542353bb0d280d07dd727dc6c480f852946b11f6d72c25e9ec1182a7619e351\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d99b64be908b5ba7d316f9683d24f093cd1147e912f144d2e7f0bebf39efd573\"" May 10 02:17:32.899030 env[1300]: time="2025-05-10T02:17:32.898993008Z" level=info msg="StartContainer for \"d99b64be908b5ba7d316f9683d24f093cd1147e912f144d2e7f0bebf39efd573\"" May 10 02:17:32.897000 audit[6298]: NETFILTER_CFG table=nat:133 family=2 entries=34 op=nft_register_rule pid=6298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:32.897000 audit[6298]: SYSCALL arch=c000003e syscall=46 success=yes exit=10468 a0=3 a1=7ffe25c1d290 a2=0 a3=7ffe25c1d27c items=0 ppid=2455 pid=6298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:32.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:32.968260 kubelet[2271]: I0510 02:17:32.968199 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2smzl\" (UniqueName: \"kubernetes.io/projected/3bda719f-2d0c-40dd-8013-db678548720f-kube-api-access-2smzl\") pod \"3bda719f-2d0c-40dd-8013-db678548720f\" (UID: \"3bda719f-2d0c-40dd-8013-db678548720f\") " May 10 02:17:32.968741 kubelet[2271]: I0510 02:17:32.968283 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3bda719f-2d0c-40dd-8013-db678548720f-calico-apiserver-certs\") pod \"3bda719f-2d0c-40dd-8013-db678548720f\" (UID: \"3bda719f-2d0c-40dd-8013-db678548720f\") " May 10 02:17:32.990722 kubelet[2271]: I0510 02:17:32.983937 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bda719f-2d0c-40dd-8013-db678548720f-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "3bda719f-2d0c-40dd-8013-db678548720f" (UID: "3bda719f-2d0c-40dd-8013-db678548720f"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 02:17:32.990722 kubelet[2271]: I0510 02:17:32.980744 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bda719f-2d0c-40dd-8013-db678548720f-kube-api-access-2smzl" (OuterVolumeSpecName: "kube-api-access-2smzl") pod "3bda719f-2d0c-40dd-8013-db678548720f" (UID: "3bda719f-2d0c-40dd-8013-db678548720f"). InnerVolumeSpecName "kube-api-access-2smzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 02:17:33.059057 env[1300]: time="2025-05-10T02:17:33.058988417Z" level=info msg="StartContainer for \"d99b64be908b5ba7d316f9683d24f093cd1147e912f144d2e7f0bebf39efd573\" returns successfully" May 10 02:17:33.069440 kubelet[2271]: I0510 02:17:33.069387 2271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2smzl\" (UniqueName: \"kubernetes.io/projected/3bda719f-2d0c-40dd-8013-db678548720f-kube-api-access-2smzl\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:33.069694 kubelet[2271]: I0510 02:17:33.069656 2271 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3bda719f-2d0c-40dd-8013-db678548720f-calico-apiserver-certs\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:33.253069 systemd[1]: Started sshd@11-10.230.33.70:22-139.178.68.195:55782.service. May 10 02:17:33.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.33.70:22-139.178.68.195:55782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:33.582919 kubelet[2271]: I0510 02:17:33.582849 2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67d5855849-drvpp" podStartSLOduration=2.582794762 podStartE2EDuration="2.582794762s" podCreationTimestamp="2025-05-10 02:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 02:17:33.555257063 +0000 UTC m=+93.338438586" watchObservedRunningTime="2025-05-10 02:17:33.582794762 +0000 UTC m=+93.365976285" May 10 02:17:33.841015 systemd[1]: run-containerd-runc-k8s.io-d99b64be908b5ba7d316f9683d24f093cd1147e912f144d2e7f0bebf39efd573-runc.0rvuIN.mount: Deactivated successfully. May 10 02:17:33.841898 systemd[1]: var-lib-kubelet-pods-3bda719f\x2d2d0c\x2d40dd\x2d8013\x2ddb678548720f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2smzl.mount: Deactivated successfully. May 10 02:17:33.842336 systemd[1]: var-lib-kubelet-pods-3bda719f\x2d2d0c\x2d40dd\x2d8013\x2ddb678548720f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
May 10 02:17:33.916000 audit[6357]: NETFILTER_CFG table=filter:134 family=2 entries=10 op=nft_register_rule pid=6357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:33.916000 audit[6357]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdaf91f3d0 a2=0 a3=7ffdaf91f3bc items=0 ppid=2455 pid=6357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:33.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:33.921000 audit[6357]: NETFILTER_CFG table=nat:135 family=2 entries=34 op=nft_register_rule pid=6357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:33.921000 audit[6357]: SYSCALL arch=c000003e syscall=46 success=yes exit=10468 a0=3 a1=7ffdaf91f3d0 a2=0 a3=7ffdaf91f3bc items=0 ppid=2455 pid=6357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:33.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:34.175199 systemd-networkd[1074]: cali6d3639beb1f: Gained IPv6LL May 10 02:17:34.208000 audit[6345]: USER_ACCT pid=6345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:34.211000 audit[6345]: CRED_ACQ pid=6345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:34.211000 audit[6345]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecbf61ab0 a2=3 a3=0 items=0 ppid=1 pid=6345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:34.211000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:17:34.214844 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:17:34.220621 sshd[6345]: Accepted publickey for core from 139.178.68.195 port 55782 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:17:34.228191 systemd-logind[1288]: New session 12 of user core. May 10 02:17:34.229528 systemd[1]: Started session-12.scope. 
May 10 02:17:34.249000 audit[6345]: USER_START pid=6345 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:34.251000 audit[6368]: CRED_ACQ pid=6368 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:34.524989 kubelet[2271]: I0510 02:17:34.524772 2271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bda719f-2d0c-40dd-8013-db678548720f" path="/var/lib/kubelet/pods/3bda719f-2d0c-40dd-8013-db678548720f/volumes" May 10 02:17:35.246654 sshd[6345]: pam_unix(sshd:session): session closed for user core May 10 02:17:35.247000 audit[6345]: USER_END pid=6345 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:35.247000 audit[6345]: CRED_DISP pid=6345 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:35.251742 systemd[1]: sshd@11-10.230.33.70:22-139.178.68.195:55782.service: Deactivated successfully. May 10 02:17:35.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.33.70:22-139.178.68.195:55782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:35.253743 systemd[1]: session-12.scope: Deactivated successfully. May 10 02:17:35.253849 systemd-logind[1288]: Session 12 logged out. Waiting for processes to exit. May 10 02:17:35.255728 systemd-logind[1288]: Removed session 12. 
May 10 02:17:35.541326 kubelet[2271]: I0510 02:17:35.541176 2271 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 02:17:38.391700 kernel: kauditd_printk_skb: 25 callbacks suppressed May 10 02:17:38.392160 kernel: audit: type=1325 audit(1746843458.385:476): table=filter:136 family=2 entries=9 op=nft_register_rule pid=6459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:38.385000 audit[6459]: NETFILTER_CFG table=filter:136 family=2 entries=9 op=nft_register_rule pid=6459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:38.385000 audit[6459]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcaaff2930 a2=0 a3=7ffcaaff291c items=0 ppid=2455 pid=6459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:38.400514 kernel: audit: type=1300 audit(1746843458.385:476): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcaaff2930 a2=0 a3=7ffcaaff291c items=0 ppid=2455 pid=6459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:38.385000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:38.404670 kernel: audit: type=1327 audit(1746843458.385:476): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:38.404000 audit[6459]: NETFILTER_CFG table=nat:137 family=2 entries=27 op=nft_register_chain pid=6459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:38.404000 audit[6459]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffcaaff2930 a2=0 a3=7ffcaaff291c items=0 ppid=2455 pid=6459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:38.418871 kernel: audit: type=1325 audit(1746843458.404:477): table=nat:137 family=2 entries=27 op=nft_register_chain pid=6459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:38.419299 kernel: audit: type=1300 audit(1746843458.404:477): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffcaaff2930 a2=0 a3=7ffcaaff291c items=0 ppid=2455 pid=6459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:38.419592 kernel: audit: type=1327 audit(1746843458.404:477): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:38.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:39.110000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.139295 kernel: audit: type=1400 audit(1746843459.110:478): avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.139451 kernel: audit: type=1400 audit(1746843459.110:478): avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.139546 kernel: audit: type=1400 audit(1746843459.110:478): avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.139606 kernel: audit: type=1400 audit(1746843459.110:478): avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.110000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.110000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.110000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.110000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.110000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.110000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.110000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.110000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.110000 audit: BPF prog-id=29 op=LOAD May 10 02:17:39.110000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd86e6f70 a2=98 a3=3 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.110000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.120000 audit: BPF prog-id=29 op=UNLOAD May 10 02:17:39.121000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.121000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.121000 audit[6515]: AVC avc: denied { 
perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.121000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.121000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.121000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.121000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.121000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.121000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.121000 audit: BPF prog-id=30 op=LOAD May 10 02:17:39.121000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd86e6d50 a2=74 a3=540051 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.121000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.125000 audit: BPF prog-id=30 op=UNLOAD May 10 02:17:39.125000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.125000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.125000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.125000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.125000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.125000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.125000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.125000 audit[6515]: AVC avc: denied { bpf } 
for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.125000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.125000 audit: BPF prog-id=31 op=LOAD May 10 02:17:39.125000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd86e6d80 a2=94 a3=2 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.125000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.131000 audit: BPF prog-id=31 op=UNLOAD May 10 02:17:39.314000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.314000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.314000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.314000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.314000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.314000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.314000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.314000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.314000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.314000 audit: BPF prog-id=32 op=LOAD May 10 02:17:39.314000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd86e6c40 a2=40 a3=1 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.314000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.316000 audit: BPF prog-id=32 op=UNLOAD May 10 02:17:39.316000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 10 02:17:39.316000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffd86e6d10 a2=50 a3=7fffd86e6df0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.316000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.329000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.329000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd86e6c50 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.329000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.330000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.330000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd86e6c80 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.330000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.330000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.330000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd86e6b90 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.330000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.330000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.330000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd86e6ca0 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.330000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.330000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.330000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd86e6c80 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.330000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.330000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.330000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd86e6c70 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.330000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.332000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.332000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd86e6ca0 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.332000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.332000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.332000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd86e6c80 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.332000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.333000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.333000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd86e6ca0 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.333000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.333000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.333000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd86e6c70 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.333000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.334000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 10 02:17:39.334000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd86e6ce0 a2=28 a3=0 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.334000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.334000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.334000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffd86e6a90 a2=50 a3=1 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.334000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.335000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.335000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.335000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.335000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.335000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.335000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.335000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.335000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.335000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.335000 audit: BPF prog-id=33 op=LOAD May 10 02:17:39.335000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffd86e6a90 a2=94 a3=5 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.336000 audit: BPF prog-id=33 op=UNLOAD May 10 
02:17:39.336000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffd86e6b40 a2=50 a3=1 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.336000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.336000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffd86e6c60 a2=4 a3=38 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.336000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.336000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.336000 audit[6515]: AVC avc: denied { confidentiality } for pid=6515 comm="bpftool" lockdown_reason="use of bpf to 
read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:17:39.336000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffd86e6cb0 a2=94 a3=6 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.336000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.338000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { confidentiality } for pid=6515 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:17:39.338000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffd86e6460 a2=94 a3=83 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.338000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { perfmon } for pid=6515 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { bpf } for pid=6515 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.338000 audit[6515]: AVC avc: denied { confidentiality } for pid=6515 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:17:39.338000 audit[6515]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffd86e6460 a2=94 a3=83 items=0 ppid=6493 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 10 02:17:39.366000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.366000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.366000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.366000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 
02:17:39.366000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.366000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.366000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.366000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.366000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.366000 audit: BPF prog-id=34 op=LOAD May 10 02:17:39.366000 audit[6519]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddeb64410 a2=98 a3=1999999999999999 items=0 ppid=6493 pid=6519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.366000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 10 02:17:39.368000 audit: BPF prog-id=34 op=UNLOAD May 10 02:17:39.368000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.368000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.368000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.368000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.368000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.368000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.368000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.368000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 
02:17:39.368000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.368000 audit: BPF prog-id=35 op=LOAD May 10 02:17:39.368000 audit[6519]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddeb642f0 a2=74 a3=ffff items=0 ppid=6493 pid=6519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.368000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 10 02:17:39.369000 audit: BPF prog-id=35 op=UNLOAD May 10 02:17:39.369000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.369000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.369000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.369000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.369000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.369000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.369000 audit[6519]: AVC avc: denied { perfmon } for pid=6519 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.369000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.369000 audit[6519]: AVC avc: denied { bpf } for pid=6519 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.369000 audit: BPF prog-id=36 op=LOAD May 10 02:17:39.369000 audit[6519]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddeb64330 a2=40 a3=7ffddeb64510 items=0 ppid=6493 pid=6519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.369000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 10 02:17:39.375000 audit: BPF prog-id=36 op=UNLOAD May 10 02:17:39.510000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.510000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.510000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.510000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.510000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.510000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.510000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.510000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.510000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.510000 audit: BPF prog-id=37 op=LOAD May 10 02:17:39.510000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5b68e840 a2=98 a3=100 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.510000 audit: BPF prog-id=37 op=UNLOAD May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit: BPF prog-id=38 op=LOAD May 10 02:17:39.511000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5b68e650 a2=74 a3=540051 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.511000 audit: BPF prog-id=38 op=UNLOAD May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit: BPF prog-id=39 op=LOAD May 10 02:17:39.511000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5b68e680 a2=94 a3=2 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.511000 audit: BPF prog-id=39 op=UNLOAD May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc5b68e550 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc5b68e580 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc5b68e490 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 
02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc5b68e5a0 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc5b68e580 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc5b68e570 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.511000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.511000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc5b68e5a0 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc5b68e580 a2=28 a3=0 items=0 ppid=6493 
pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc5b68e5a0 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc5b68e570 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc5b68e5e0 a2=28 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { 
perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.512000 audit: BPF prog-id=40 op=LOAD May 10 02:17:39.512000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5b68e450 a2=40 a3=0 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.512000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.512000 audit: BPF prog-id=40 op=UNLOAD May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc5b68e440 a2=50 a3=2800 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.513000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc5b68e440 a2=50 a3=2800 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.513000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 
02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit: BPF prog-id=41 op=LOAD May 10 02:17:39.513000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5b68dc60 a2=94 a3=2 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.513000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.513000 audit: BPF prog-id=41 op=UNLOAD May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { perfmon } for pid=6545 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit[6545]: AVC avc: denied { bpf } for pid=6545 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.513000 audit: BPF prog-id=42 op=LOAD May 10 02:17:39.513000 audit[6545]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5b68dd60 a2=94 a3=30 items=0 ppid=6493 pid=6545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.513000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 10 02:17:39.518000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.518000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.518000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.518000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.518000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.518000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.518000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 10 02:17:39.518000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.518000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.518000 audit: BPF prog-id=43 op=LOAD May 10 02:17:39.518000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff212d15b0 a2=98 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.518000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.518000 audit: BPF prog-id=43 op=UNLOAD May 10 02:17:39.519000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit: BPF prog-id=44 op=LOAD May 10 02:17:39.519000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff212d1390 a2=74 a3=540051 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.519000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.519000 audit: BPF prog-id=44 op=UNLOAD May 10 02:17:39.519000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.519000 audit: BPF prog-id=45 op=LOAD May 10 02:17:39.519000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff212d13c0 a2=94 a3=2 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.519000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.519000 audit: BPF prog-id=45 op=UNLOAD May 10 02:17:39.721000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.721000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.721000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.721000 audit[6547]: AVC avc: denied { perfmon } for 
pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.721000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.721000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.721000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.721000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.721000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.721000 audit: BPF prog-id=46 op=LOAD May 10 02:17:39.721000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff212d1280 a2=40 a3=1 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.721000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.728000 audit: BPF prog-id=46 op=UNLOAD May 10 02:17:39.728000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.728000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff212d1350 a2=50 a3=7fff212d1430 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.728000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.740000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.740000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff212d1290 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.740000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.741000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.741000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff212d12c0 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.741000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.741000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.741000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff212d11d0 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.741000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.742000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.742000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff212d12e0 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.742000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.742000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.742000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff212d12c0 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.742000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.743000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.743000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff212d12b0 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.743000 
audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.743000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.743000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff212d12e0 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.743000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.743000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.743000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff212d12c0 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.743000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.744000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.744000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff212d12e0 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.744000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.744000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.744000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff212d12b0 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.744000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.745000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.745000 audit[6547]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff212d1320 a2=28 a3=0 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.745000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff212d10d0 a2=50 a3=1 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.746000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.746000 audit: BPF prog-id=47 op=LOAD May 10 02:17:39.746000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff212d10d0 a2=94 a3=5 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.746000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.747000 audit: BPF prog-id=47 op=UNLOAD May 10 02:17:39.747000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.747000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff212d1180 a2=50 a3=1 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.747000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff212d12a0 a2=4 a3=38 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.748000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.748000 audit[6547]: AVC avc: denied { confidentiality } for pid=6547 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:17:39.748000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff212d12f0 a2=94 a3=6 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.748000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { confidentiality } for pid=6547 
comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:17:39.749000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff212d0aa0 a2=94 a3=83 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.749000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { perfmon } for pid=6547 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.749000 audit[6547]: AVC avc: denied { confidentiality } for pid=6547 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 10 02:17:39.749000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff212d0aa0 a2=94 a3=83 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.749000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.750000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.750000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff212d24e0 a2=10 a3=f1f00800 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.750000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.750000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.750000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff212d2380 a2=10 a3=3 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.750000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.751000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.751000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff212d2320 a2=10 a3=3 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.751000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.751000 audit[6547]: AVC avc: denied { bpf } for pid=6547 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 10 02:17:39.751000 audit[6547]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff212d2320 a2=10 a3=7 items=0 ppid=6493 pid=6547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.751000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 10 02:17:39.757000 audit: BPF prog-id=42 op=UNLOAD May 10 02:17:39.757000 audit[2128]: SYSCALL arch=c000003e syscall=281 success=yes exit=0 a0=4 a1=c00580776c a2=80 a3=1 items=0 ppid=1961 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.757000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3233302E33332E3730002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 May 10 02:17:39.872000 audit[6594]: NETFILTER_CFG table=filter:138 family=2 entries=126 op=nft_register_chain pid=6594 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:17:39.872000 audit[6594]: SYSCALL arch=c000003e syscall=46 success=yes exit=44240 a0=3 a1=7ffc12c52ab0 a2=0 a3=7ffc12c52a9c items=0 ppid=6493 pid=6594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.872000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:17:39.875000 audit[6594]: NETFILTER_CFG table=filter:139 family=2 entries=4 op=nft_unregister_chain pid=6594 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:17:39.875000 audit[6594]: SYSCALL arch=c000003e syscall=46 success=yes exit=560 a0=3 a1=7ffc12c52ab0 a2=0 a3=5641e05c4000 items=0 ppid=6493 pid=6594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:39.875000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:17:40.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.33.70:22-139.178.68.195:49002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:40.396125 systemd[1]: Started sshd@12-10.230.33.70:22-139.178.68.195:49002.service. 
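The audit PROCTITLE records above store the offending command line as a hex-encoded, NUL-separated argv. A minimal decoding sketch (Python; `decode_proctitle` and the split-up `sample` literal are illustrative, not part of the log) shows that the bpftool payload repeated above is Calico inspecting its pinned XDP prefilter program:

```python
# Sketch: decode an audit PROCTITLE hex payload (NUL-separated argv).
# The sample below is the bpftool proctitle copied from the records above.
def decode_proctitle(hex_payload: str) -> str:
    raw = bytes.fromhex(hex_payload)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00"))

sample = (
    "627066746F6F6C002D2D6A736F6E002D2D707265747479"
    "0070726F670073686F770070696E6E6564"
    "002F7379732F66732F6270662F63616C69636F2F7864702F"
    "70726566696C7465725F76315F63616C69636F5F746D705F41"
)
print(decode_proctitle(sample))
# bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
```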
May 10 02:17:41.361000 audit[6596]: USER_ACCT pid=6596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:41.365307 sshd[6596]: Accepted publickey for core from 139.178.68.195 port 49002 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:17:41.364000 audit[6596]: CRED_ACQ pid=6596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:41.365000 audit[6596]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc43b93b0 a2=3 a3=0 items=0 ppid=1 pid=6596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:41.365000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:17:41.369231 sshd[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:17:41.379535 systemd-logind[1288]: New session 13 of user core. May 10 02:17:41.380457 systemd[1]: Started session-13.scope. May 10 02:17:41.388000 audit[6596]: USER_START pid=6596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:41.391000 audit[6600]: CRED_ACQ pid=6600 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:41.674000 audit[6602]: NETFILTER_CFG table=filter:140 family=2 entries=8 op=nft_register_rule pid=6602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:41.674000 audit[6602]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe499a01e0 a2=0 a3=7ffe499a01cc items=0 ppid=2455 pid=6602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:41.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:41.694000 audit[6602]: NETFILTER_CFG table=nat:141 family=2 entries=40 op=nft_register_chain pid=6602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:41.694000 audit[6602]: SYSCALL arch=c000003e syscall=46 success=yes exit=13124 a0=3 a1=7ffe499a01e0 a2=0 a3=7ffe499a01cc items=0 ppid=2455 pid=6602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:41.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:41.786000 audit[6604]: NETFILTER_CFG table=filter:142 family=2 entries=8 op=nft_register_rule pid=6604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 
10 02:17:41.786000 audit[6604]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcd16a7990 a2=0 a3=7ffcd16a797c items=0 ppid=2455 pid=6604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:41.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:41.789954 env[1300]: time="2025-05-10T02:17:41.789674251Z" level=info msg="StopContainer for \"c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1\" with timeout 30 (s)" May 10 02:17:41.791000 audit[6604]: NETFILTER_CFG table=nat:143 family=2 entries=40 op=nft_unregister_chain pid=6604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:41.791000 audit[6604]: SYSCALL arch=c000003e syscall=46 success=yes exit=11364 a0=3 a1=7ffcd16a7990 a2=0 a3=7ffcd16a797c items=0 ppid=2455 pid=6604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:41.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:41.795471 env[1300]: time="2025-05-10T02:17:41.795430997Z" level=info msg="Stop container \"c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1\" with signal terminated" May 10 02:17:41.891140 env[1300]: time="2025-05-10T02:17:41.888783066Z" level=info msg="shim disconnected" id=c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1 May 10 02:17:41.891416 env[1300]: time="2025-05-10T02:17:41.891367449Z" level=warning msg="cleaning up after shim disconnected" id=c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1 namespace=k8s.io May 10 02:17:41.891690 env[1300]: time="2025-05-10T02:17:41.891623018Z" level=info msg="cleaning up dead shim" May 10 02:17:41.893224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1-rootfs.mount: Deactivated successfully. May 10 02:17:41.918115 env[1300]: time="2025-05-10T02:17:41.918048379Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6628 runtime=io.containerd.runc.v2\n" May 10 02:17:41.937312 env[1300]: time="2025-05-10T02:17:41.936396395Z" level=info msg="StopContainer for \"c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1\" returns successfully" May 10 02:17:41.938938 env[1300]: time="2025-05-10T02:17:41.938898885Z" level=info msg="StopPodSandbox for \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\"" May 10 02:17:41.939079 env[1300]: time="2025-05-10T02:17:41.939035027Z" level=info msg="Container to stop \"c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 02:17:41.943170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731-shm.mount: Deactivated successfully. May 10 02:17:42.012166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731-rootfs.mount: Deactivated successfully. 
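The numeric fields in the SYSCALL records in this stretch are architecture-specific: arch=c000003e is AUDIT_ARCH_X86_64, so the syscall numbers read against the x86_64 table. The sketch below covers only the numbers that appear in this excerpt plus the two capability bits named in the AVC lines; the mapping is standard kernel knowledge rather than something stated in the log, and the helper is illustrative:

```python
# Sketch: interpret the numeric fields of the audit SYSCALL records above.
# Only the values seen in this excerpt are listed; this is not a full table.
X86_64_SYSCALLS = {
    1:   "write",        # sshd records
    46:  "sendmsg",      # iptables-nft/iptables-restore pushing netlink messages
    281: "epoll_pwait",  # kube-apiserver record
    321: "bpf",          # bpftool prog/map access
}
CAPABILITIES = {38: "CAP_PERFMON", 39: "CAP_BPF"}  # capability= in the AVC lines

def describe(fields: dict) -> str:
    nr = int(fields["syscall"])
    ok = fields.get("success") == "yes"
    return f"{X86_64_SYSCALLS.get(nr, nr)} -> {'ok' if ok else 'exit=' + fields.get('exit', '?')}"

print(describe({"syscall": "321", "success": "no", "exit": "-22"}))
# bpf -> exit=-22  (errno 22, EINVAL)
```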
May 10 02:17:42.016603 env[1300]: time="2025-05-10T02:17:42.016397332Z" level=info msg="shim disconnected" id=7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731 May 10 02:17:42.016603 env[1300]: time="2025-05-10T02:17:42.016460795Z" level=warning msg="cleaning up after shim disconnected" id=7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731 namespace=k8s.io May 10 02:17:42.016603 env[1300]: time="2025-05-10T02:17:42.016478757Z" level=info msg="cleaning up dead shim" May 10 02:17:42.053237 env[1300]: time="2025-05-10T02:17:42.053167396Z" level=warning msg="cleanup warnings time=\"2025-05-10T02:17:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6665 runtime=io.containerd.runc.v2\n" May 10 02:17:42.411199 systemd-networkd[1074]: cali27bc54b3006: Link DOWN May 10 02:17:42.411210 systemd-networkd[1074]: cali27bc54b3006: Lost carrier May 10 02:17:42.477000 audit[6705]: NETFILTER_CFG table=filter:144 family=2 entries=40 op=nft_register_rule pid=6705 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:17:42.477000 audit[6705]: SYSCALL arch=c000003e syscall=46 success=yes exit=5348 a0=3 a1=7ffdc4ab05a0 a2=0 a3=7ffdc4ab058c items=0 ppid=6493 pid=6705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:42.477000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:17:42.478000 audit[6705]: NETFILTER_CFG table=filter:145 family=2 entries=2 op=nft_unregister_chain pid=6705 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 10 02:17:42.478000 audit[6705]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffdc4ab05a0 a2=0 a3=561905c64000 items=0 ppid=6493 pid=6705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:42.478000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 10 02:17:42.574453 kubelet[2271]: I0510 02:17:42.574356 2271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:17:42.738721 sshd[6596]: pam_unix(sshd:session): session closed for user core May 10 02:17:42.740000 audit[6596]: USER_END pid=6596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:42.740000 audit[6596]: CRED_DISP pid=6596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:42.744209 systemd[1]: sshd@12-10.230.33.70:22-139.178.68.195:49002.service: Deactivated successfully. 
May 10 02:17:42.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.33.70:22-139.178.68.195:49002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:42.750706 systemd[1]: session-13.scope: Deactivated successfully. May 10 02:17:42.754208 systemd-logind[1288]: Session 13 logged out. Waiting for processes to exit. May 10 02:17:42.757578 systemd-logind[1288]: Removed session 13. May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.402 [INFO][6693] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.406 [INFO][6693] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" iface="eth0" netns="/var/run/netns/cni-a6a8fea9-9473-59d6-7c55-56bf52ae1fab" May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.406 [INFO][6693] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" iface="eth0" netns="/var/run/netns/cni-a6a8fea9-9473-59d6-7c55-56bf52ae1fab" May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.459 [INFO][6693] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" after=53.214556ms iface="eth0" netns="/var/run/netns/cni-a6a8fea9-9473-59d6-7c55-56bf52ae1fab" May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.459 [INFO][6693] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.459 [INFO][6693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.668 [INFO][6704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.669 [INFO][6704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.670 [INFO][6704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.760 [INFO][6704] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.761 [INFO][6704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.763 [INFO][6704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
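The Calico teardown messages are emitted by the CNI plugin and re-logged by containerd inside the env[1300] entries, each carrying its own embedded timestamp, level, PID, source file and line. A small parsing sketch for that embedded format, assuming the layout shown above (the regex and sample string are illustrative, with the container ID shortened):

```python
import re

# Sketch: pull apart the CNI plugin lines embedded in the env[1300] entries,
# e.g. "2025-05-10 02:17:42.459 [INFO][6693] cni-plugin/k8s.go 615: ...".
CNI_LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"\[(?P<level>[A-Z]+)\]\[(?P<pid>\d+)\] "
    r"(?P<src>\S+) (?P<lineno>\d+): (?P<msg>.*)$"
)

sample = ("2025-05-10 02:17:42.459 [INFO][6693] cni-plugin/k8s.go 615: "
          'Releasing IP address(es) ContainerID="7cb1ed99..."')  # id shortened here
m = CNI_LINE.match(sample)
if m:
    print(m.group("level"), m.group("src"), m.group("msg"))
```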
May 10 02:17:42.770918 env[1300]: 2025-05-10 02:17:42.766 [INFO][6693] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:17:42.780534 env[1300]: time="2025-05-10T02:17:42.777724331Z" level=info msg="TearDown network for sandbox \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\" successfully" May 10 02:17:42.780534 env[1300]: time="2025-05-10T02:17:42.777784946Z" level=info msg="StopPodSandbox for \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\" returns successfully" May 10 02:17:42.777208 systemd[1]: run-netns-cni\x2da6a8fea9\x2d9473\x2d59d6\x2d7c55\x2d56bf52ae1fab.mount: Deactivated successfully. May 10 02:17:42.782423 env[1300]: time="2025-05-10T02:17:42.782385297Z" level=info msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\"" May 10 02:17:42.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.33.70:22-139.178.68.195:49006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:42.887313 systemd[1]: Started sshd@13-10.230.33.70:22-139.178.68.195:49006.service. May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.871 [WARNING][6726] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0", GenerateName:"calico-apiserver-59c6df465c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd3e93bd-3e59-43f7-987b-d85581ad5591", ResourceVersion:"1286", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 2, 16, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6df465c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-it8yl.gb1.brightbox.com", ContainerID:"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731", Pod:"calico-apiserver-59c6df465c-9n96s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27bc54b3006", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.874 [INFO][6726] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.874 [INFO][6726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" iface="eth0" netns="" May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.874 [INFO][6726] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.874 [INFO][6726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.923 [INFO][6733] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.924 [INFO][6733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.924 [INFO][6733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.938 [WARNING][6733] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.938 [INFO][6733] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.940 [INFO][6733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:17:42.946671 env[1300]: 2025-05-10 02:17:42.943 [INFO][6726] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:17:42.949169 env[1300]: time="2025-05-10T02:17:42.946727691Z" level=info msg="TearDown network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" successfully" May 10 02:17:42.949169 env[1300]: time="2025-05-10T02:17:42.946787310Z" level=info msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" returns successfully" May 10 02:17:43.021000 audit[6742]: NETFILTER_CFG table=filter:146 family=2 entries=8 op=nft_register_rule pid=6742 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:43.021000 audit[6742]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd66cdef50 a2=0 a3=7ffd66cdef3c items=0 ppid=2455 pid=6742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:43.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:43.026000 audit[6742]: NETFILTER_CFG table=nat:147 family=2 entries=36 op=nft_register_rule pid=6742 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:17:43.026000 audit[6742]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd66cdef50 a2=0 a3=7ffd66cdef3c items=0 ppid=2455 pid=6742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:43.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:17:43.077545 kubelet[2271]: I0510 02:17:43.077476 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwqjv\" (UniqueName: \"kubernetes.io/projected/cd3e93bd-3e59-43f7-987b-d85581ad5591-kube-api-access-jwqjv\") pod \"cd3e93bd-3e59-43f7-987b-d85581ad5591\" (UID: \"cd3e93bd-3e59-43f7-987b-d85581ad5591\") " May 10 02:17:43.077862 kubelet[2271]: I0510 02:17:43.077832 2271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd3e93bd-3e59-43f7-987b-d85581ad5591-calico-apiserver-certs\") pod \"cd3e93bd-3e59-43f7-987b-d85581ad5591\" (UID: \"cd3e93bd-3e59-43f7-987b-d85581ad5591\") " May 10 02:17:43.104000 systemd[1]: var-lib-kubelet-pods-cd3e93bd\x2d3e59\x2d43f7\x2d987b\x2dd85581ad5591-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djwqjv.mount: Deactivated successfully. May 10 02:17:43.114059 systemd[1]: var-lib-kubelet-pods-cd3e93bd\x2d3e59\x2d43f7\x2d987b\x2dd85581ad5591-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 10 02:17:43.122660 kubelet[2271]: I0510 02:17:43.122571 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3e93bd-3e59-43f7-987b-d85581ad5591-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "cd3e93bd-3e59-43f7-987b-d85581ad5591" (UID: "cd3e93bd-3e59-43f7-987b-d85581ad5591"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 02:17:43.125875 kubelet[2271]: I0510 02:17:43.116325 2271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3e93bd-3e59-43f7-987b-d85581ad5591-kube-api-access-jwqjv" (OuterVolumeSpecName: "kube-api-access-jwqjv") pod "cd3e93bd-3e59-43f7-987b-d85581ad5591" (UID: "cd3e93bd-3e59-43f7-987b-d85581ad5591"). InnerVolumeSpecName "kube-api-access-jwqjv". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 02:17:43.179693 kubelet[2271]: I0510 02:17:43.179002 2271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jwqjv\" (UniqueName: \"kubernetes.io/projected/cd3e93bd-3e59-43f7-987b-d85581ad5591-kube-api-access-jwqjv\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:43.179693 kubelet[2271]: I0510 02:17:43.179042 2271 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd3e93bd-3e59-43f7-987b-d85581ad5591-calico-apiserver-certs\") on node \"srv-it8yl.gb1.brightbox.com\" DevicePath \"\"" May 10 02:17:43.816000 audit[6738]: USER_ACCT pid=6738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:43.821345 sshd[6738]: Accepted publickey for core from 139.178.68.195 port 49006 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:17:43.828732 kernel: kauditd_printk_skb: 508 callbacks suppressed May 10 02:17:43.829689 kernel: audit: type=1101 audit(1746843463.816:589): pid=6738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:43.829000 audit[6738]: CRED_ACQ pid=6738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:43.831483 sshd[6738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:17:43.841122 kernel: audit: type=1103 audit(1746843463.829:590): pid=6738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:43.841188 kernel: audit: type=1006 audit(1746843463.830:591): pid=6738 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 May 10 02:17:43.830000 audit[6738]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe16026020 a2=3 a3=0 items=0 ppid=1 pid=6738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:43.848054 kernel: audit: type=1300 audit(1746843463.830:591): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe16026020 a2=3 a3=0 items=0 ppid=1 pid=6738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 
02:17:43.830000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:17:43.850959 kernel: audit: type=1327 audit(1746843463.830:591): proctitle=737368643A20636F7265205B707269765D May 10 02:17:43.857908 systemd-logind[1288]: New session 14 of user core. May 10 02:17:43.859610 systemd[1]: Started session-14.scope. May 10 02:17:43.872000 audit[6738]: USER_START pid=6738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:43.882319 kernel: audit: type=1105 audit(1746843463.872:592): pid=6738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:43.882441 kernel: audit: type=1103 audit(1746843463.881:593): pid=6746 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:43.881000 audit[6746]: CRED_ACQ pid=6746 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:44.532260 kubelet[2271]: I0510 02:17:44.527766 2271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3e93bd-3e59-43f7-987b-d85581ad5591" path="/var/lib/kubelet/pods/cd3e93bd-3e59-43f7-987b-d85581ad5591/volumes" May 10 02:17:44.892718 sshd[6738]: pam_unix(sshd:session): session closed for user core May 10 02:17:44.905793 kernel: audit: type=1106 audit(1746843464.894:594): pid=6738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:44.894000 audit[6738]: USER_END pid=6738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:44.900866 systemd[1]: sshd@13-10.230.33.70:22-139.178.68.195:49006.service: Deactivated successfully. May 10 02:17:44.902418 systemd[1]: session-14.scope: Deactivated successfully. May 10 02:17:44.907353 systemd-logind[1288]: Session 14 logged out. Waiting for processes to exit. May 10 02:17:44.910208 systemd-logind[1288]: Removed session 14. 
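The kubelet volume cleanup a few lines above shows systemd deactivating .mount units whose names are escaped paths ("/" becomes "-", while literal "-" and "~" become \x2d and \x7e). A small unescaping sketch (the helper name is illustrative) recovers the pod volume path that the kubelet then reports as cleaned up:

```python
import re

# Sketch: recover the mounted path from a kubelet volume .mount unit name
# as seen in the "Deactivated successfully" lines above.
def unescape_mount_unit(unit: str) -> str:
    body = unit.removesuffix(".mount")
    body = body.replace("-", "/")                      # unit separators first
    body = re.sub(r"\\x([0-9a-fA-F]{2})",              # then literal escapes
                  lambda m: chr(int(m.group(1), 16)), body)
    return "/" + body

print(unescape_mount_unit(
    r"var-lib-kubelet-pods-cd3e93bd\x2d3e59\x2d43f7\x2d987b\x2dd85581ad5591"
    r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djwqjv.mount"
))
# /var/lib/kubelet/pods/cd3e93bd-3e59-43f7-987b-d85581ad5591/volumes/kubernetes.io~projected/kube-api-access-jwqjv
```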
May 10 02:17:44.894000 audit[6738]: CRED_DISP pid=6738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:44.920669 kernel: audit: type=1104 audit(1746843464.894:595): pid=6738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:44.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.33.70:22-139.178.68.195:49006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:44.927651 kernel: audit: type=1131 audit(1746843464.894:596): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.33.70:22-139.178.68.195:49006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:45.036873 systemd[1]: Started sshd@14-10.230.33.70:22-139.178.68.195:49010.service. May 10 02:17:45.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.33.70:22-139.178.68.195:49010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:45.955000 audit[6762]: USER_ACCT pid=6762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:45.957595 sshd[6762]: Accepted publickey for core from 139.178.68.195 port 49010 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:17:45.958000 audit[6762]: CRED_ACQ pid=6762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:45.958000 audit[6762]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea9d0b9f0 a2=3 a3=0 items=0 ppid=1 pid=6762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:45.958000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:17:45.960717 sshd[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:17:45.968722 systemd-logind[1288]: New session 15 of user core. May 10 02:17:45.970110 systemd[1]: Started session-15.scope. 
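Sessions 13 through 15 in this stretch all follow the same PAM/audit sequence from 139.178.68.195: USER_ACCT and CRED_ACQ, a record assigning ses=, USER_START, then USER_END, CRED_DISP and SERVICE_STOP roughly a second later. A grouping sketch, assuming the journal lines are available as plain text (the regex and demo lines are illustrative and truncated):

```python
import re
from collections import defaultdict

# Sketch: group the PAM-related audit records by audit session id (ses=)
# to reconstruct each SSH login/logout cycle (sessions 13-15 in this excerpt).
EVENT = re.compile(r"audit\[\d+\]: (?P<type>[A-Z_]+) .*?ses=(?P<ses>\d+)")

def sessions(lines):
    out = defaultdict(list)
    for line in lines:
        m = EVENT.search(line)
        if m and m.group("ses") != "4294967295":   # 4294967295 == unset (-1)
            out[m.group("ses")].append(m.group("type"))
    return dict(out)

demo = [
    "May 10 02:17:43.872000 audit[6738]: USER_START pid=6738 uid=0 auid=500 ses=14 ...",
    "May 10 02:17:44.894000 audit[6738]: USER_END pid=6738 uid=0 auid=500 ses=14 ...",
]
print(sessions(demo))   # {'14': ['USER_START', 'USER_END']}
```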
May 10 02:17:45.985000 audit[6762]: USER_START pid=6762 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:45.988000 audit[6765]: CRED_ACQ pid=6765 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:46.734793 sshd[6762]: pam_unix(sshd:session): session closed for user core May 10 02:17:46.740000 audit[6762]: USER_END pid=6762 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:46.740000 audit[6762]: CRED_DISP pid=6762 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:46.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.33.70:22-139.178.68.195:49010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:46.743836 systemd[1]: sshd@14-10.230.33.70:22-139.178.68.195:49010.service: Deactivated successfully. May 10 02:17:46.745932 systemd[1]: session-15.scope: Deactivated successfully. May 10 02:17:46.745995 systemd-logind[1288]: Session 15 logged out. Waiting for processes to exit. May 10 02:17:46.747814 systemd-logind[1288]: Removed session 15. May 10 02:17:51.881386 systemd[1]: Started sshd@15-10.230.33.70:22-139.178.68.195:42326.service. May 10 02:17:51.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.33.70:22-139.178.68.195:42326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:51.889411 kernel: kauditd_printk_skb: 11 callbacks suppressed May 10 02:17:51.890077 kernel: audit: type=1130 audit(1746843471.882:606): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.33.70:22-139.178.68.195:42326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:17:52.869000 audit[6808]: USER_ACCT pid=6808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:52.881226 sshd[6808]: Accepted publickey for core from 139.178.68.195 port 42326 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:17:52.881971 kernel: audit: type=1101 audit(1746843472.869:607): pid=6808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:52.882000 audit[6808]: CRED_ACQ pid=6808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:52.884959 sshd[6808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:17:52.890703 kernel: audit: type=1103 audit(1746843472.882:608): pid=6808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:52.895076 kernel: audit: type=1006 audit(1746843472.882:609): pid=6808 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 10 02:17:52.895422 kernel: audit: type=1300 audit(1746843472.882:609): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeba5dbf80 a2=3 a3=0 items=0 ppid=1 pid=6808 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:52.882000 audit[6808]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeba5dbf80 a2=3 a3=0 items=0 ppid=1 pid=6808 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:52.882000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:17:52.905709 kernel: audit: type=1327 audit(1746843472.882:609): proctitle=737368643A20636F7265205B707269765D May 10 02:17:52.912197 systemd-logind[1288]: New session 16 of user core. May 10 02:17:52.912483 systemd[1]: Started session-16.scope. 
May 10 02:17:52.925000 audit[6808]: USER_START pid=6808 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:52.938958 kernel: audit: type=1105 audit(1746843472.925:610): pid=6808 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:52.933000 audit[6813]: CRED_ACQ pid=6813 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:52.948609 kernel: audit: type=1103 audit(1746843472.933:611): pid=6813 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:53.766017 sshd[6808]: pam_unix(sshd:session): session closed for user core May 10 02:17:53.782275 kernel: audit: type=1106 audit(1746843473.768:612): pid=6808 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:53.768000 audit[6808]: USER_END pid=6808 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:53.774780 systemd[1]: sshd@15-10.230.33.70:22-139.178.68.195:42326.service: Deactivated successfully. May 10 02:17:53.776534 systemd[1]: session-16.scope: Deactivated successfully. May 10 02:17:53.783784 systemd-logind[1288]: Session 16 logged out. Waiting for processes to exit. May 10 02:17:53.786275 systemd-logind[1288]: Removed session 16. May 10 02:17:53.768000 audit[6808]: CRED_DISP pid=6808 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:53.796660 kernel: audit: type=1104 audit(1746843473.768:613): pid=6808 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:53.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.33.70:22-139.178.68.195:42326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:58.910022 systemd[1]: Started sshd@16-10.230.33.70:22-139.178.68.195:34638.service. 
May 10 02:17:58.922929 kernel: kauditd_printk_skb: 1 callbacks suppressed May 10 02:17:58.923092 kernel: audit: type=1130 audit(1746843478.909:615): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.33.70:22-139.178.68.195:34638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:58.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.33.70:22-139.178.68.195:34638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:17:59.823000 audit[6842]: USER_ACCT pid=6842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:59.831946 kernel: audit: type=1101 audit(1746843479.823:616): pid=6842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:59.832035 sshd[6842]: Accepted publickey for core from 139.178.68.195 port 34638 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:17:59.833000 audit[6842]: CRED_ACQ pid=6842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:59.845654 kernel: audit: type=1103 audit(1746843479.833:617): pid=6842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:59.845736 kernel: audit: type=1006 audit(1746843479.833:618): pid=6842 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 May 10 02:17:59.841557 sshd[6842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:17:59.853598 kernel: audit: type=1300 audit(1746843479.833:618): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc76759b30 a2=3 a3=0 items=0 ppid=1 pid=6842 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:59.833000 audit[6842]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc76759b30 a2=3 a3=0 items=0 ppid=1 pid=6842 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:17:59.833000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:17:59.858672 kernel: audit: type=1327 audit(1746843479.833:618): proctitle=737368643A20636F7265205B707269765D May 10 02:17:59.864927 systemd-logind[1288]: New session 17 of user core. May 10 02:17:59.866194 systemd[1]: Started session-17.scope. 
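The "kernel: audit: type=NNNN" console lines interleaved above are kauditd re-printing the same events by numeric record type. The pairings below are taken from the records printed alongside them in this excerpt (1006, which carries the auid/ses assignment, is the audit LOGIN record type); the lookup helper is illustrative:

```python
# Sketch: numeric audit record types seen in the kernel console lines above,
# paired with the named records logged next to them in this excerpt.
AUDIT_TYPES = {
    1006: "LOGIN",          # auid/ses assignment for the new SSH session
    1101: "USER_ACCT",
    1103: "CRED_ACQ",
    1104: "CRED_DISP",
    1105: "USER_START",
    1106: "USER_END",
    1130: "SERVICE_START",  # systemd sshd@... unit started
    1131: "SERVICE_STOP",
    1300: "SYSCALL",
    1327: "PROCTITLE",
}

def name(type_id: int) -> str:
    return AUDIT_TYPES.get(type_id, f"unknown({type_id})")

print(name(1130))   # SERVICE_START
```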
May 10 02:17:59.890803 kernel: audit: type=1105 audit(1746843479.880:619): pid=6842 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:59.880000 audit[6842]: USER_START pid=6842 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:59.891000 audit[6845]: CRED_ACQ pid=6845 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:17:59.899728 kernel: audit: type=1103 audit(1746843479.891:620): pid=6845 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:00.782459 sshd[6842]: pam_unix(sshd:session): session closed for user core May 10 02:18:00.797686 kernel: audit: type=1106 audit(1746843480.787:621): pid=6842 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:00.787000 audit[6842]: USER_END pid=6842 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:00.798215 systemd[1]: sshd@16-10.230.33.70:22-139.178.68.195:34638.service: Deactivated successfully. May 10 02:18:00.799486 systemd[1]: session-17.scope: Deactivated successfully. May 10 02:18:00.795000 audit[6842]: CRED_DISP pid=6842 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:00.815448 kernel: audit: type=1104 audit(1746843480.795:622): pid=6842 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:00.812018 systemd-logind[1288]: Session 17 logged out. Waiting for processes to exit. May 10 02:18:00.816159 systemd-logind[1288]: Removed session 17. May 10 02:18:00.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.33.70:22-139.178.68.195:34638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:18:03.816343 kubelet[2271]: I0510 02:18:03.816264 2271 scope.go:117] "RemoveContainer" containerID="bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5" May 10 02:18:03.825277 env[1300]: time="2025-05-10T02:18:03.824560119Z" level=info msg="RemoveContainer for \"bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5\"" May 10 02:18:03.832235 env[1300]: time="2025-05-10T02:18:03.832199452Z" level=info msg="RemoveContainer for \"bb8a5bb84c6523eb82d684c52018fb61fd4509c76271b7546e5c06e3307364d5\" returns successfully" May 10 02:18:03.832740 kubelet[2271]: I0510 02:18:03.832707 2271 scope.go:117] "RemoveContainer" containerID="c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1" May 10 02:18:03.834458 env[1300]: time="2025-05-10T02:18:03.834381303Z" level=info msg="RemoveContainer for \"c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1\"" May 10 02:18:03.838219 env[1300]: time="2025-05-10T02:18:03.838172485Z" level=info msg="RemoveContainer for \"c671a123216c2968f1584803d7b02d407eaf29c0db19eb524083ccb7448bfbc1\" returns successfully" May 10 02:18:03.838492 kubelet[2271]: I0510 02:18:03.838444 2271 scope.go:117] "RemoveContainer" containerID="991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4" May 10 02:18:03.840368 env[1300]: time="2025-05-10T02:18:03.840324580Z" level=info msg="RemoveContainer for \"991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4\"" May 10 02:18:03.844601 env[1300]: time="2025-05-10T02:18:03.844499635Z" level=info msg="RemoveContainer for \"991459f3270f0e0088ce6bf0b9879acad4af013408a99faaf796c87169795ef4\" returns successfully" May 10 02:18:03.846292 env[1300]: time="2025-05-10T02:18:03.846258046Z" level=info msg="StopPodSandbox for \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\"" May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.003 [WARNING][6878] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.004 [INFO][6878] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.004 [INFO][6878] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" iface="eth0" netns="" May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.004 [INFO][6878] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.004 [INFO][6878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.090 [INFO][6886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.091 [INFO][6886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.091 [INFO][6886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.102 [WARNING][6886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.102 [INFO][6886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.105 [INFO][6886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:04.110272 env[1300]: 2025-05-10 02:18:04.107 [INFO][6878] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:18:04.111974 env[1300]: time="2025-05-10T02:18:04.111576552Z" level=info msg="TearDown network for sandbox \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\" successfully" May 10 02:18:04.111974 env[1300]: time="2025-05-10T02:18:04.111651321Z" level=info msg="StopPodSandbox for \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\" returns successfully" May 10 02:18:04.113000 env[1300]: time="2025-05-10T02:18:04.112962663Z" level=info msg="RemovePodSandbox for \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\"" May 10 02:18:04.113263 env[1300]: time="2025-05-10T02:18:04.113171120Z" level=info msg="Forcibly stopping sandbox \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\"" May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.195 [WARNING][6904] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.195 [INFO][6904] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.195 [INFO][6904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" iface="eth0" netns="" May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.196 [INFO][6904] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.196 [INFO][6904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.233 [INFO][6911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.234 [INFO][6911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.235 [INFO][6911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.244 [WARNING][6911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.244 [INFO][6911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" HandleID="k8s-pod-network.61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--kube--controllers--6d9c5fc9f8--m6gc5-eth0" May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.246 [INFO][6911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:04.251011 env[1300]: 2025-05-10 02:18:04.248 [INFO][6904] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0" May 10 02:18:04.252463 env[1300]: time="2025-05-10T02:18:04.251059569Z" level=info msg="TearDown network for sandbox \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\" successfully" May 10 02:18:04.255916 env[1300]: time="2025-05-10T02:18:04.255871274Z" level=info msg="RemovePodSandbox \"61dc46d65b73f784673cede723d9f0004671d82eeeb0527a79d6fa7fa2d7e8c0\" returns successfully" May 10 02:18:04.256579 env[1300]: time="2025-05-10T02:18:04.256525091Z" level=info msg="StopPodSandbox for \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\"" May 10 02:18:04.256755 env[1300]: time="2025-05-10T02:18:04.256687680Z" level=info msg="TearDown network for sandbox \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\" successfully" May 10 02:18:04.256848 env[1300]: time="2025-05-10T02:18:04.256752865Z" level=info msg="StopPodSandbox for \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\" returns successfully" May 10 02:18:04.257481 env[1300]: time="2025-05-10T02:18:04.257445827Z" level=info msg="RemovePodSandbox for \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\"" May 10 02:18:04.257743 env[1300]: time="2025-05-10T02:18:04.257660547Z" level=info msg="Forcibly stopping sandbox \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\"" May 10 02:18:04.258162 env[1300]: time="2025-05-10T02:18:04.258118425Z" level=info msg="TearDown network for sandbox \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\" successfully" May 10 02:18:04.262694 env[1300]: time="2025-05-10T02:18:04.262658958Z" level=info msg="RemovePodSandbox \"3b8c2091a00f30dcc4c31aeffccd9542d8259ec2f104781847242cc8aae42dd2\" returns successfully" May 10 02:18:04.263398 env[1300]: time="2025-05-10T02:18:04.263331069Z" level=info msg="StopPodSandbox for \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\"" May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.324 [WARNING][6930] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.324 [INFO][6930] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.324 [INFO][6930] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" iface="eth0" netns="" May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.324 [INFO][6930] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.324 [INFO][6930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.358 [INFO][6937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.358 [INFO][6937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.359 [INFO][6937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.368 [WARNING][6937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.368 [INFO][6937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.370 [INFO][6937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:04.375015 env[1300]: 2025-05-10 02:18:04.372 [INFO][6930] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:18:04.377458 env[1300]: time="2025-05-10T02:18:04.376735985Z" level=info msg="TearDown network for sandbox \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\" successfully" May 10 02:18:04.377458 env[1300]: time="2025-05-10T02:18:04.376784021Z" level=info msg="StopPodSandbox for \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\" returns successfully" May 10 02:18:04.378476 env[1300]: time="2025-05-10T02:18:04.378429514Z" level=info msg="RemovePodSandbox for \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\"" May 10 02:18:04.378712 env[1300]: time="2025-05-10T02:18:04.378490130Z" level=info msg="Forcibly stopping sandbox \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\"" May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.432 [WARNING][6955] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.432 [INFO][6955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.432 [INFO][6955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" iface="eth0" netns="" May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.433 [INFO][6955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.433 [INFO][6955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.465 [INFO][6962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.465 [INFO][6962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.465 [INFO][6962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.475 [WARNING][6962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.475 [INFO][6962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" HandleID="k8s-pod-network.7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.477 [INFO][6962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:04.481170 env[1300]: 2025-05-10 02:18:04.479 [INFO][6955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731" May 10 02:18:04.482359 env[1300]: time="2025-05-10T02:18:04.481221161Z" level=info msg="TearDown network for sandbox \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\" successfully" May 10 02:18:04.486830 env[1300]: time="2025-05-10T02:18:04.486730319Z" level=info msg="RemovePodSandbox \"7cb1ed991778181a413a74bcfde110cfce6ad1873bdef390aececaa8361b9731\" returns successfully" May 10 02:18:04.487484 env[1300]: time="2025-05-10T02:18:04.487447822Z" level=info msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\"" May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.543 [WARNING][6980] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.543 [INFO][6980] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.543 [INFO][6980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" iface="eth0" netns="" May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.543 [INFO][6980] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.543 [INFO][6980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.575 [INFO][6988] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.575 [INFO][6988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.575 [INFO][6988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.584 [WARNING][6988] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.584 [INFO][6988] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.586 [INFO][6988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:04.589666 env[1300]: 2025-05-10 02:18:04.587 [INFO][6980] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:18:04.591179 env[1300]: time="2025-05-10T02:18:04.589710595Z" level=info msg="TearDown network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" successfully" May 10 02:18:04.591179 env[1300]: time="2025-05-10T02:18:04.589759121Z" level=info msg="StopPodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" returns successfully" May 10 02:18:04.591179 env[1300]: time="2025-05-10T02:18:04.590323973Z" level=info msg="RemovePodSandbox for \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\"" May 10 02:18:04.591179 env[1300]: time="2025-05-10T02:18:04.590363971Z" level=info msg="Forcibly stopping sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\"" May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.644 [WARNING][7006] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.644 [INFO][7006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.644 [INFO][7006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" iface="eth0" netns="" May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.644 [INFO][7006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.644 [INFO][7006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.687 [INFO][7013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.688 [INFO][7013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.688 [INFO][7013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.696 [WARNING][7013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.696 [INFO][7013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" HandleID="k8s-pod-network.f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--9n96s-eth0" May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.698 [INFO][7013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:04.702319 env[1300]: 2025-05-10 02:18:04.700 [INFO][7006] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941" May 10 02:18:04.704102 env[1300]: time="2025-05-10T02:18:04.702276982Z" level=info msg="TearDown network for sandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" successfully" May 10 02:18:04.707294 env[1300]: time="2025-05-10T02:18:04.707258818Z" level=info msg="RemovePodSandbox \"f2ee40744c6b804970c50c0484a3342d87527aa8c1964e8ab539349debaea941\" returns successfully" May 10 02:18:04.708051 env[1300]: time="2025-05-10T02:18:04.708013568Z" level=info msg="StopPodSandbox for \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\"" May 10 02:18:04.708226 env[1300]: time="2025-05-10T02:18:04.708121166Z" level=info msg="TearDown network for sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" successfully" May 10 02:18:04.708329 env[1300]: time="2025-05-10T02:18:04.708226739Z" level=info msg="StopPodSandbox for \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" returns successfully" May 10 02:18:04.708606 env[1300]: time="2025-05-10T02:18:04.708572067Z" level=info msg="RemovePodSandbox for \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\"" May 10 02:18:04.708733 env[1300]: time="2025-05-10T02:18:04.708612670Z" level=info msg="Forcibly stopping sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\"" May 10 02:18:04.708802 env[1300]: time="2025-05-10T02:18:04.708725990Z" level=info msg="TearDown network for sandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" successfully" May 10 02:18:04.712730 env[1300]: time="2025-05-10T02:18:04.712694024Z" level=info msg="RemovePodSandbox \"7daef9635c85de439c8bc44dbc42788e240a4f261f85e3e72d7f6d02e261f0f3\" returns successfully" May 10 02:18:04.713317 env[1300]: time="2025-05-10T02:18:04.713279336Z" level=info msg="StopPodSandbox for \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\"" May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.765 [WARNING][7031] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.767 [INFO][7031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.767 [INFO][7031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" iface="eth0" netns="" May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.767 [INFO][7031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.767 [INFO][7031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.807 [INFO][7038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.807 [INFO][7038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.807 [INFO][7038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.816 [WARNING][7038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.816 [INFO][7038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.818 [INFO][7038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:04.822397 env[1300]: 2025-05-10 02:18:04.820 [INFO][7031] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:18:04.824185 env[1300]: time="2025-05-10T02:18:04.822430770Z" level=info msg="TearDown network for sandbox \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\" successfully" May 10 02:18:04.824185 env[1300]: time="2025-05-10T02:18:04.822477122Z" level=info msg="StopPodSandbox for \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\" returns successfully" May 10 02:18:04.824745 env[1300]: time="2025-05-10T02:18:04.824707601Z" level=info msg="RemovePodSandbox for \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\"" May 10 02:18:04.825129 env[1300]: time="2025-05-10T02:18:04.825062765Z" level=info msg="Forcibly stopping sandbox \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\"" May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.884 [WARNING][7056] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.884 [INFO][7056] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.884 [INFO][7056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" iface="eth0" netns="" May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.884 [INFO][7056] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.884 [INFO][7056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.921 [INFO][7063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.924 [INFO][7063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.924 [INFO][7063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.932 [WARNING][7063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.932 [INFO][7063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" HandleID="k8s-pod-network.28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.934 [INFO][7063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:04.939166 env[1300]: 2025-05-10 02:18:04.936 [INFO][7056] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5" May 10 02:18:04.940981 env[1300]: time="2025-05-10T02:18:04.939207534Z" level=info msg="TearDown network for sandbox \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\" successfully" May 10 02:18:04.943786 env[1300]: time="2025-05-10T02:18:04.943749852Z" level=info msg="RemovePodSandbox \"28497987eadda0d07c51f1371518582c32413fae1f88fed37a5425bba7d7c5c5\" returns successfully" May 10 02:18:04.944485 env[1300]: time="2025-05-10T02:18:04.944449421Z" level=info msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\"" May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.003 [WARNING][7083] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.003 [INFO][7083] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.003 [INFO][7083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" iface="eth0" netns="" May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.003 [INFO][7083] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.003 [INFO][7083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.029 [INFO][7091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.029 [INFO][7091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.029 [INFO][7091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.038 [WARNING][7091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.038 [INFO][7091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.040 [INFO][7091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:05.044232 env[1300]: 2025-05-10 02:18:05.041 [INFO][7083] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:18:05.046989 env[1300]: time="2025-05-10T02:18:05.045726288Z" level=info msg="TearDown network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" successfully" May 10 02:18:05.046989 env[1300]: time="2025-05-10T02:18:05.045774066Z" level=info msg="StopPodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" returns successfully" May 10 02:18:05.046989 env[1300]: time="2025-05-10T02:18:05.046570850Z" level=info msg="RemovePodSandbox for \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\"" May 10 02:18:05.046989 env[1300]: time="2025-05-10T02:18:05.046614472Z" level=info msg="Forcibly stopping sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\"" May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.095 [WARNING][7109] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" WorkloadEndpoint="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.096 [INFO][7109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.096 [INFO][7109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" iface="eth0" netns="" May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.096 [INFO][7109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.096 [INFO][7109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.131 [INFO][7116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.131 [INFO][7116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.131 [INFO][7116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.139 [WARNING][7116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.139 [INFO][7116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" HandleID="k8s-pod-network.a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" Workload="srv--it8yl.gb1.brightbox.com-k8s-calico--apiserver--59c6df465c--t7qd5-eth0" May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.141 [INFO][7116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 02:18:05.145162 env[1300]: 2025-05-10 02:18:05.143 [INFO][7109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d" May 10 02:18:05.146073 env[1300]: time="2025-05-10T02:18:05.145207356Z" level=info msg="TearDown network for sandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" successfully" May 10 02:18:05.149361 env[1300]: time="2025-05-10T02:18:05.149295242Z" level=info msg="RemovePodSandbox \"a442eaab8c7b44cad9a59109180221c0021360d592a3f1c9a20ba4c0ce71548d\" returns successfully" May 10 02:18:05.925219 systemd[1]: Started sshd@17-10.230.33.70:22-139.178.68.195:44052.service. May 10 02:18:05.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.33.70:22-139.178.68.195:44052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:05.930847 kernel: kauditd_printk_skb: 1 callbacks suppressed May 10 02:18:05.930969 kernel: audit: type=1130 audit(1746843485.925:624): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.33.70:22-139.178.68.195:44052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:18:06.874000 audit[7122]: USER_ACCT pid=7122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:06.875258 sshd[7122]: Accepted publickey for core from 139.178.68.195 port 44052 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:18:06.880000 audit[7122]: CRED_ACQ pid=7122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:06.883910 sshd[7122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:18:06.888463 kernel: audit: type=1101 audit(1746843486.874:625): pid=7122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:06.888574 kernel: audit: type=1103 audit(1746843486.880:626): pid=7122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:06.896675 kernel: audit: type=1006 audit(1746843486.880:627): pid=7122 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 May 10 02:18:06.880000 audit[7122]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5b595cc0 a2=3 a3=0 items=0 ppid=1 pid=7122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:06.880000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:18:06.906741 kernel: audit: type=1300 audit(1746843486.880:627): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5b595cc0 a2=3 a3=0 items=0 ppid=1 pid=7122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:06.906854 kernel: audit: type=1327 audit(1746843486.880:627): proctitle=737368643A20636F7265205B707269765D May 10 02:18:06.911459 systemd-logind[1288]: New session 18 of user core. May 10 02:18:06.913957 systemd[1]: Started session-18.scope. 
May 10 02:18:06.925000 audit[7122]: USER_START pid=7122 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:06.935723 kernel: audit: type=1105 audit(1746843486.925:628): pid=7122 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:06.935856 kernel: audit: type=1103 audit(1746843486.933:629): pid=7125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:06.933000 audit[7125]: CRED_ACQ pid=7125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:07.935891 sshd[7122]: pam_unix(sshd:session): session closed for user core May 10 02:18:07.937000 audit[7122]: USER_END pid=7122 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:07.951663 kernel: audit: type=1106 audit(1746843487.937:630): pid=7122 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:07.950000 audit[7122]: CRED_DISP pid=7122 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:07.954130 systemd[1]: sshd@17-10.230.33.70:22-139.178.68.195:44052.service: Deactivated successfully. May 10 02:18:07.956031 systemd[1]: session-18.scope: Deactivated successfully. May 10 02:18:07.958825 kernel: audit: type=1104 audit(1746843487.950:631): pid=7122 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:07.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.33.70:22-139.178.68.195:44052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:07.960403 systemd-logind[1288]: Session 18 logged out. Waiting for processes to exit. May 10 02:18:07.962273 systemd-logind[1288]: Removed session 18. 
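Annotation (not part of the captured log): the entries above show one complete sshd login for user core. systemd starts the per-connection sshd@17-... unit, PAM emits USER_ACCT, CRED_ACQ and USER_START audit records, systemd-logind opens session-18.scope, and on logout USER_END, CRED_DISP and the unit's SERVICE_STOP follow before the session is removed. Sessions 19 through 23 further down repeat the same pattern. Purely as an illustrative sketch (it assumes one journal entry per line, as journalctl prints it, rather than the wrapped lines of this capture), a small Go filter could pair the "New session" and "Removed session" messages to report how long each session lasted:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    // openInfo records when a session was opened and for which user.
    type openInfo struct {
        user string
        at   time.Time
    }

    func main() {
        // Timestamp layout matching the journal lines above, e.g. "May 10 02:18:06.911459".
        const layout = "Jan 2 15:04:05.000000"
        newRe := regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user (\w+)`)
        delRe := regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)\.`)
        opened := map[string]openInfo{}

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if m := newRe.FindStringSubmatch(line); m != nil {
                if t, err := time.Parse(layout, m[1]); err == nil {
                    opened[m[2]] = openInfo{user: m[3], at: t}
                }
            } else if m := delRe.FindStringSubmatch(line); m != nil {
                if info, ok := opened[m[2]]; ok {
                    if end, err := time.Parse(layout, m[1]); err == nil {
                        fmt.Printf("session %s (user %s) lasted %s\n", m[2], info.user, end.Sub(info.at))
                    }
                    delete(opened, m[2])
                }
            }
        }
    }

Run against the systemd-logind messages in this section it would report roughly one second each for sessions 18 and 19.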
May 10 02:18:13.088479 kernel: kauditd_printk_skb: 1 callbacks suppressed May 10 02:18:13.088755 kernel: audit: type=1130 audit(1746843493.075:633): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.33.70:22-139.178.68.195:44058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:13.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.33.70:22-139.178.68.195:44058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:13.076845 systemd[1]: Started sshd@18-10.230.33.70:22-139.178.68.195:44058.service. May 10 02:18:14.009259 kernel: audit: type=1101 audit(1746843493.986:634): pid=7135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.013116 kernel: audit: type=1103 audit(1746843494.008:635): pid=7135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:13.986000 audit[7135]: USER_ACCT pid=7135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.008000 audit[7135]: CRED_ACQ pid=7135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.013609 sshd[7135]: Accepted publickey for core from 139.178.68.195 port 44058 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:18:14.010921 sshd[7135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:18:14.025150 kernel: audit: type=1006 audit(1746843494.008:636): pid=7135 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 May 10 02:18:14.025282 kernel: audit: type=1300 audit(1746843494.008:636): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff26af6e80 a2=3 a3=0 items=0 ppid=1 pid=7135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:14.008000 audit[7135]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff26af6e80 a2=3 a3=0 items=0 ppid=1 pid=7135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:14.036668 kernel: audit: type=1327 audit(1746843494.008:636): proctitle=737368643A20636F7265205B707269765D May 10 02:18:14.008000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:18:14.037789 systemd-logind[1288]: New session 19 of user core. May 10 02:18:14.038323 systemd[1]: Started session-19.scope. 
May 10 02:18:14.045000 audit[7135]: USER_START pid=7135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.051000 audit[7138]: CRED_ACQ pid=7138 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.062565 kernel: audit: type=1105 audit(1746843494.045:637): pid=7135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.062696 kernel: audit: type=1103 audit(1746843494.051:638): pid=7138 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.747671 sshd[7135]: pam_unix(sshd:session): session closed for user core May 10 02:18:14.748000 audit[7135]: USER_END pid=7135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.749000 audit[7135]: CRED_DISP pid=7135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.763129 systemd[1]: sshd@18-10.230.33.70:22-139.178.68.195:44058.service: Deactivated successfully. May 10 02:18:14.765164 systemd[1]: session-19.scope: Deactivated successfully. May 10 02:18:14.767365 kernel: audit: type=1106 audit(1746843494.748:639): pid=7135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.767554 kernel: audit: type=1104 audit(1746843494.749:640): pid=7135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:14.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.33.70:22-139.178.68.195:44058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:14.769057 systemd-logind[1288]: Session 19 logged out. Waiting for processes to exit. May 10 02:18:14.771815 systemd-logind[1288]: Removed session 19. May 10 02:18:14.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.33.70:22-139.178.68.195:44068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:18:14.893790 systemd[1]: Started sshd@19-10.230.33.70:22-139.178.68.195:44068.service. May 10 02:18:15.786000 audit[7150]: USER_ACCT pid=7150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:15.788728 sshd[7150]: Accepted publickey for core from 139.178.68.195 port 44068 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:18:15.787000 audit[7150]: CRED_ACQ pid=7150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:15.788000 audit[7150]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe88a68bc0 a2=3 a3=0 items=0 ppid=1 pid=7150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:15.788000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:18:15.791047 sshd[7150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:18:15.798709 systemd-logind[1288]: New session 20 of user core. May 10 02:18:15.799658 systemd[1]: Started session-20.scope. May 10 02:18:15.809000 audit[7150]: USER_START pid=7150 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:15.812000 audit[7153]: CRED_ACQ pid=7153 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:16.945780 sshd[7150]: pam_unix(sshd:session): session closed for user core May 10 02:18:16.947000 audit[7150]: USER_END pid=7150 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:16.948000 audit[7150]: CRED_DISP pid=7150 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:16.952061 systemd-logind[1288]: Session 20 logged out. Waiting for processes to exit. May 10 02:18:16.952575 systemd[1]: sshd@19-10.230.33.70:22-139.178.68.195:44068.service: Deactivated successfully. May 10 02:18:16.954116 systemd[1]: session-20.scope: Deactivated successfully. May 10 02:18:16.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.33.70:22-139.178.68.195:44068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:16.955612 systemd-logind[1288]: Removed session 20. 
May 10 02:18:17.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.33.70:22-139.178.68.195:49320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:17.090579 systemd[1]: Started sshd@20-10.230.33.70:22-139.178.68.195:49320.service. May 10 02:18:18.016000 audit[7161]: USER_ACCT pid=7161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:18.020397 sshd[7161]: Accepted publickey for core from 139.178.68.195 port 49320 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:18:18.020901 sshd[7161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:18:18.018000 audit[7161]: CRED_ACQ pid=7161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:18.018000 audit[7161]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa483f650 a2=3 a3=0 items=0 ppid=1 pid=7161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:18.018000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:18:18.028740 systemd-logind[1288]: New session 21 of user core. May 10 02:18:18.029617 systemd[1]: Started session-21.scope. May 10 02:18:18.045000 audit[7161]: USER_START pid=7161 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:18.048000 audit[7164]: CRED_ACQ pid=7164 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:18.672835 systemd[1]: run-containerd-runc-k8s.io-084543f0ca5a1043693087c72a8097c079136ce53312b00fcfe95383ad2dbfc4-runc.PjCU6E.mount: Deactivated successfully. 
May 10 02:18:21.552697 kernel: kauditd_printk_skb: 20 callbacks suppressed May 10 02:18:21.556561 kernel: audit: type=1325 audit(1746843501.545:657): table=filter:148 family=2 entries=20 op=nft_register_rule pid=7204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:21.557156 kernel: audit: type=1300 audit(1746843501.545:657): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc53861bf0 a2=0 a3=7ffc53861bdc items=0 ppid=2455 pid=7204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:21.545000 audit[7204]: NETFILTER_CFG table=filter:148 family=2 entries=20 op=nft_register_rule pid=7204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:21.545000 audit[7204]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc53861bf0 a2=0 a3=7ffc53861bdc items=0 ppid=2455 pid=7204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:21.564935 kernel: audit: type=1327 audit(1746843501.545:657): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:21.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:21.567000 audit[7204]: NETFILTER_CFG table=nat:149 family=2 entries=22 op=nft_register_rule pid=7204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:21.574663 kernel: audit: type=1325 audit(1746843501.567:658): table=nat:149 family=2 entries=22 op=nft_register_rule pid=7204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:21.567000 audit[7204]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc53861bf0 a2=0 a3=0 items=0 ppid=2455 pid=7204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:21.583674 kernel: audit: type=1300 audit(1746843501.567:658): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc53861bf0 a2=0 a3=0 items=0 ppid=2455 pid=7204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:21.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:21.588658 kernel: audit: type=1327 audit(1746843501.567:658): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:21.593162 sshd[7161]: pam_unix(sshd:session): session closed for user core May 10 02:18:21.600000 audit[7161]: USER_END pid=7161 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:21.612659 kernel: audit: type=1106 audit(1746843501.600:659): pid=7161 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:21.620313 systemd[1]: sshd@20-10.230.33.70:22-139.178.68.195:49320.service: Deactivated successfully. May 10 02:18:21.628734 kernel: audit: type=1104 audit(1746843501.600:660): pid=7161 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:21.600000 audit[7161]: CRED_DISP pid=7161 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:21.629182 systemd-logind[1288]: Session 21 logged out. Waiting for processes to exit. May 10 02:18:21.629355 systemd[1]: session-21.scope: Deactivated successfully. May 10 02:18:21.639984 kernel: audit: type=1325 audit(1746843501.617:661): table=filter:150 family=2 entries=32 op=nft_register_rule pid=7206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:21.617000 audit[7206]: NETFILTER_CFG table=filter:150 family=2 entries=32 op=nft_register_rule pid=7206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:21.640336 systemd-logind[1288]: Removed session 21. May 10 02:18:21.617000 audit[7206]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffd663d5890 a2=0 a3=7ffd663d587c items=0 ppid=2455 pid=7206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:21.651068 kernel: audit: type=1300 audit(1746843501.617:661): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffd663d5890 a2=0 a3=7ffd663d587c items=0 ppid=2455 pid=7206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:21.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:21.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.33.70:22-139.178.68.195:49320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:21.640000 audit[7206]: NETFILTER_CFG table=nat:151 family=2 entries=22 op=nft_register_rule pid=7206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:21.640000 audit[7206]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd663d5890 a2=0 a3=0 items=0 ppid=2455 pid=7206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:21.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:21.704391 systemd[1]: Started sshd@21-10.230.33.70:22-139.178.68.195:49326.service. 
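Annotation (not part of the captured log): the audit SYSCALL and NETFILTER_CFG records above carry the triggering command line as a PROCTITLE field, hex-encoded with NUL-separated arguments. The value attached to the nft_register_rule events decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the shorter sshd value 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]". A minimal Go helper for decoding such fields (the example input is copied from the events above):

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle converts an audit PROCTITLE hex payload (NUL-separated
    // argv) back into a readable command line.
    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
        return strings.Join(args, " "), nil
    }

    func main() {
        // Hex string taken from the NETFILTER_CFG audit events above.
        cmd, err := decodeProctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273")
        if err != nil {
            panic(err)
        }
        fmt.Println(cmd) // iptables-restore -w 5 -W 100000 --noflush --counters
    }

encoding/hex accepts the upper-case digits the kernel emits, so the field can be pasted in as-is.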
May 10 02:18:21.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.33.70:22-139.178.68.195:49326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:22.816000 audit[7209]: USER_ACCT pid=7209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:22.818147 sshd[7209]: Accepted publickey for core from 139.178.68.195 port 49326 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:18:22.818000 audit[7209]: CRED_ACQ pid=7209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:22.818000 audit[7209]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5a5c87a0 a2=3 a3=0 items=0 ppid=1 pid=7209 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:22.818000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:18:22.821559 sshd[7209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:18:22.831681 systemd[1]: Started session-22.scope. May 10 02:18:22.832773 systemd-logind[1288]: New session 22 of user core. May 10 02:18:22.845000 audit[7209]: USER_START pid=7209 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:22.849000 audit[7214]: CRED_ACQ pid=7214 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:24.791966 sshd[7209]: pam_unix(sshd:session): session closed for user core May 10 02:18:24.795000 audit[7209]: USER_END pid=7209 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:24.796000 audit[7209]: CRED_DISP pid=7209 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:24.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.33.70:22-139.178.68.195:49326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:24.798825 systemd[1]: sshd@21-10.230.33.70:22-139.178.68.195:49326.service: Deactivated successfully. May 10 02:18:24.808970 systemd[1]: session-22.scope: Deactivated successfully. May 10 02:18:24.814592 systemd-logind[1288]: Session 22 logged out. Waiting for processes to exit. 
May 10 02:18:24.819434 systemd-logind[1288]: Removed session 22. May 10 02:18:24.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.33.70:22-139.178.68.195:49328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:24.937099 systemd[1]: Started sshd@22-10.230.33.70:22-139.178.68.195:49328.service. May 10 02:18:25.876000 audit[7259]: USER_ACCT pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:25.878996 sshd[7259]: Accepted publickey for core from 139.178.68.195 port 49328 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:18:25.879000 audit[7259]: CRED_ACQ pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:25.880000 audit[7259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee9946920 a2=3 a3=0 items=0 ppid=1 pid=7259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:25.880000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:18:25.881764 sshd[7259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:18:25.891523 systemd[1]: Started session-23.scope. May 10 02:18:25.892780 systemd-logind[1288]: New session 23 of user core. May 10 02:18:25.908000 audit[7259]: USER_START pid=7259 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:25.911000 audit[7263]: CRED_ACQ pid=7263 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:26.788994 sshd[7259]: pam_unix(sshd:session): session closed for user core May 10 02:18:26.803773 kernel: kauditd_printk_skb: 24 callbacks suppressed May 10 02:18:26.805693 kernel: audit: type=1106 audit(1746843506.792:679): pid=7259 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:26.792000 audit[7259]: USER_END pid=7259 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:26.807609 systemd-logind[1288]: Session 23 logged out. Waiting for processes to exit. May 10 02:18:26.809167 systemd[1]: sshd@22-10.230.33.70:22-139.178.68.195:49328.service: Deactivated successfully. 
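Each ssh session above is bracketed by a USER_START and a USER_END audit record that share the same ses= value, so session lifetimes can be read straight off the timestamps (session 22, for instance, runs from 02:18:22.845 to 02:18:24.795). The sketch below pairs such records; the tuple layout is an assumption of this sketch, and the sample epoch values are transcribed from the session-22 timestamps above.

    # Pair USER_START/USER_END audit records by their ses= field to get session
    # lifetimes. Sketch only; the tuples below correspond to session 22 above.
    from collections import defaultdict

    records = [
        (22, "USER_START", 1746843502.845),
        (22, "USER_END",   1746843504.795),
    ]

    spans = defaultdict(dict)
    for ses, rtype, ts in records:
        spans[ses][rtype] = ts

    for ses, ev in sorted(spans.items()):
        if "USER_START" in ev and "USER_END" in ev:
            print(f"ses={ses}: {ev['USER_END'] - ev['USER_START']:.3f}s")
    # ses=22: 1.950s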
May 10 02:18:26.810468 systemd[1]: session-23.scope: Deactivated successfully. May 10 02:18:26.811869 systemd-logind[1288]: Removed session 23. May 10 02:18:26.803000 audit[7259]: CRED_DISP pid=7259 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:26.825671 kernel: audit: type=1104 audit(1746843506.803:680): pid=7259 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:26.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.33.70:22-139.178.68.195:49328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:26.838657 kernel: audit: type=1131 audit(1746843506.809:681): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.33.70:22-139.178.68.195:49328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:31.935241 systemd[1]: Started sshd@23-10.230.33.70:22-139.178.68.195:54222.service. May 10 02:18:31.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.33.70:22-139.178.68.195:54222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:31.947685 kernel: audit: type=1130 audit(1746843511.936:682): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.33.70:22-139.178.68.195:54222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 02:18:31.975000 audit[7275]: NETFILTER_CFG table=filter:152 family=2 entries=20 op=nft_register_rule pid=7275 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:31.981676 kernel: audit: type=1325 audit(1746843511.975:683): table=filter:152 family=2 entries=20 op=nft_register_rule pid=7275 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:31.975000 audit[7275]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcd0abd0a0 a2=0 a3=7ffcd0abd08c items=0 ppid=2455 pid=7275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:31.991684 kernel: audit: type=1300 audit(1746843511.975:683): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcd0abd0a0 a2=0 a3=7ffcd0abd08c items=0 ppid=2455 pid=7275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:31.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:31.995000 audit[7275]: NETFILTER_CFG table=nat:153 family=2 entries=106 op=nft_register_chain pid=7275 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:32.004670 kernel: audit: type=1327 audit(1746843511.975:683): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:32.004789 kernel: audit: type=1325 audit(1746843511.995:684): table=nat:153 family=2 entries=106 op=nft_register_chain pid=7275 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 10 02:18:32.004874 kernel: audit: type=1300 audit(1746843511.995:684): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffcd0abd0a0 a2=0 a3=7ffcd0abd08c items=0 ppid=2455 pid=7275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:31.995000 audit[7275]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffcd0abd0a0 a2=0 a3=7ffcd0abd08c items=0 ppid=2455 pid=7275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:32.013023 kernel: audit: type=1327 audit(1746843511.995:684): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:31.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 10 02:18:32.878156 sshd[7273]: Accepted publickey for core from 139.178.68.195 port 54222 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:18:32.877000 audit[7273]: USER_ACCT pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:32.885000 audit[7273]: CRED_ACQ pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:32.887747 sshd[7273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:18:32.893118 kernel: audit: type=1101 audit(1746843512.877:685): pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:32.893241 kernel: audit: type=1103 audit(1746843512.885:686): pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:32.900487 kernel: audit: type=1006 audit(1746843512.885:687): pid=7273 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 May 10 02:18:32.885000 audit[7273]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed320c2c0 a2=3 a3=0 items=0 ppid=1 pid=7273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:32.885000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:18:32.902169 systemd-logind[1288]: New session 24 of user core. May 10 02:18:32.903528 systemd[1]: Started session-24.scope. May 10 02:18:32.911000 audit[7273]: USER_START pid=7273 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:32.914000 audit[7279]: CRED_ACQ pid=7279 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:33.633397 sshd[7273]: pam_unix(sshd:session): session closed for user core May 10 02:18:33.635000 audit[7273]: USER_END pid=7273 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:33.635000 audit[7273]: CRED_DISP pid=7273 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:33.638031 systemd[1]: sshd@23-10.230.33.70:22-139.178.68.195:54222.service: Deactivated successfully. May 10 02:18:33.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.33.70:22-139.178.68.195:54222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:33.639799 systemd[1]: session-24.scope: Deactivated successfully. May 10 02:18:33.639860 systemd-logind[1288]: Session 24 logged out. Waiting for processes to exit. 
May 10 02:18:33.641548 systemd-logind[1288]: Removed session 24. May 10 02:18:38.779770 systemd[1]: Started sshd@24-10.230.33.70:22-139.178.68.195:45162.service. May 10 02:18:38.787194 kernel: kauditd_printk_skb: 7 callbacks suppressed May 10 02:18:38.787395 kernel: audit: type=1130 audit(1746843518.779:693): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.33.70:22-139.178.68.195:45162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:38.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.33.70:22-139.178.68.195:45162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:39.696000 audit[7289]: USER_ACCT pid=7289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:39.704485 sshd[7289]: Accepted publickey for core from 139.178.68.195 port 45162 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:18:39.706329 kernel: audit: type=1101 audit(1746843519.696:694): pid=7289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:39.706434 kernel: audit: type=1103 audit(1746843519.703:695): pid=7289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:39.703000 audit[7289]: CRED_ACQ pid=7289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:39.715419 kernel: audit: type=1006 audit(1746843519.703:696): pid=7289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 May 10 02:18:39.715832 sshd[7289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:18:39.703000 audit[7289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca7989600 a2=3 a3=0 items=0 ppid=1 pid=7289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:39.723759 kernel: audit: type=1300 audit(1746843519.703:696): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca7989600 a2=3 a3=0 items=0 ppid=1 pid=7289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:39.703000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:18:39.726659 kernel: audit: type=1327 audit(1746843519.703:696): proctitle=737368643A20636F7265205B707269765D May 10 02:18:39.732156 systemd-logind[1288]: New session 25 of user core. May 10 02:18:39.733192 systemd[1]: Started session-25.scope. 
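The same events are logged twice in this section: journald prints the symbolic record name (NETFILTER_CFG, SYSCALL, USER_END, ...) while kauditd echoes the numeric form ("kernel: audit: type=NNNN ..."). The mapping below covers only the type numbers that appear here; all but 1006 can be read off the paired lines above, and 1006 is the kernel's LOGIN record that assigns auid/ses. The annotate() helper is an illustrative name, not an existing tool.

    import re

    # Numeric audit types seen in this section, paired with the symbolic names
    # journald prints for the same events (e.g. type=1325 vs NETFILTER_CFG).
    AUDIT_TYPES_SEEN = {
        1006: "LOGIN",
        1101: "USER_ACCT",
        1103: "CRED_ACQ",
        1104: "CRED_DISP",
        1105: "USER_START",
        1106: "USER_END",
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
    }

    def annotate(line: str) -> str:
        # Append the symbolic name after every known "type=NNNN" token.
        return re.sub(
            r"type=(\d+)",
            lambda m: f"type={m.group(1)} ({AUDIT_TYPES_SEEN.get(int(m.group(1)), '?')})",
            line,
        )

    print(annotate("kernel: audit: type=1325 audit(1746843511.975:683): table=filter:152"))
    # kernel: audit: type=1325 (NETFILTER_CFG) audit(1746843511.975:683): table=filter:152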
May 10 02:18:39.743000 audit[7289]: USER_START pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:39.752705 kernel: audit: type=1105 audit(1746843519.743:697): pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:39.746000 audit[7292]: CRED_ACQ pid=7292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:39.761691 kernel: audit: type=1103 audit(1746843519.746:698): pid=7292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:40.424935 sshd[7289]: pam_unix(sshd:session): session closed for user core May 10 02:18:40.427000 audit[7289]: USER_END pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:40.441231 kernel: audit: type=1106 audit(1746843520.427:699): pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:40.441343 kernel: audit: type=1104 audit(1746843520.427:700): pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:40.427000 audit[7289]: CRED_DISP pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:40.435884 systemd-logind[1288]: Session 25 logged out. Waiting for processes to exit. May 10 02:18:40.437347 systemd[1]: sshd@24-10.230.33.70:22-139.178.68.195:45162.service: Deactivated successfully. May 10 02:18:40.439025 systemd[1]: session-25.scope: Deactivated successfully. May 10 02:18:40.440448 systemd-logind[1288]: Removed session 25. May 10 02:18:40.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.33.70:22-139.178.68.195:45162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:45.575603 systemd[1]: Started sshd@25-10.230.33.70:22-139.178.68.195:38888.service. 
May 10 02:18:45.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.33.70:22-139.178.68.195:38888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:45.582423 kernel: kauditd_printk_skb: 1 callbacks suppressed May 10 02:18:45.589891 kernel: audit: type=1130 audit(1746843525.575:702): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.33.70:22-139.178.68.195:38888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 02:18:46.487007 sshd[7305]: Accepted publickey for core from 139.178.68.195 port 38888 ssh2: RSA SHA256:WN4f51QI5pkflGflnLefC3FAKa0BnDYOIe8vab4uHa0 May 10 02:18:46.485000 audit[7305]: USER_ACCT pid=7305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:46.498665 kernel: audit: type=1101 audit(1746843526.485:703): pid=7305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:46.497000 audit[7305]: CRED_ACQ pid=7305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:46.499799 sshd[7305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 02:18:46.510186 kernel: audit: type=1103 audit(1746843526.497:704): pid=7305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:46.510390 kernel: audit: type=1006 audit(1746843526.497:705): pid=7305 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 May 10 02:18:46.510805 kernel: audit: type=1300 audit(1746843526.497:705): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef9553680 a2=3 a3=0 items=0 ppid=1 pid=7305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:46.497000 audit[7305]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef9553680 a2=3 a3=0 items=0 ppid=1 pid=7305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 02:18:46.523386 systemd[1]: Started session-26.scope. May 10 02:18:46.524917 systemd-logind[1288]: New session 26 of user core. 
May 10 02:18:46.497000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 10 02:18:46.538867 kernel: audit: type=1327 audit(1746843526.497:705): proctitle=737368643A20636F7265205B707269765D May 10 02:18:46.541950 kernel: audit: type=1105 audit(1746843526.537:706): pid=7305 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:46.537000 audit[7305]: USER_START pid=7305 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:46.548000 audit[7308]: CRED_ACQ pid=7308 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:46.557696 kernel: audit: type=1103 audit(1746843526.548:707): pid=7308 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:47.266363 sshd[7305]: pam_unix(sshd:session): session closed for user core May 10 02:18:47.267000 audit[7305]: USER_END pid=7305 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:47.273661 systemd[1]: sshd@25-10.230.33.70:22-139.178.68.195:38888.service: Deactivated successfully. May 10 02:18:47.281224 kernel: audit: type=1106 audit(1746843527.267:708): pid=7305 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:47.281686 kernel: audit: type=1104 audit(1746843527.269:709): pid=7305 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:47.269000 audit[7305]: CRED_DISP pid=7305 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' May 10 02:18:47.275313 systemd[1]: session-26.scope: Deactivated successfully. May 10 02:18:47.277723 systemd-logind[1288]: Session 26 logged out. Waiting for processes to exit. May 10 02:18:47.280534 systemd-logind[1288]: Removed session 26. May 10 02:18:47.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.33.70:22-139.178.68.195:38888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
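The SYSCALL records in this section carry raw numbers: arch=c000003e is the audit constant for x86-64, and the only two syscall numbers that occur are 46 and 1, i.e. sendmsg (the netlink batch xtables-nft-multi submits for each iptables-restore run) and write (issued by sshd). A minimal lookup, limited to what is seen here; the function name is illustrative.

    # Resolve the raw syscall numbers seen in the SYSCALL records above.
    # Only x86-64 (arch=c000003e) records occur in this log.
    X86_64_SYSCALLS_SEEN = {1: "write", 46: "sendmsg"}

    def syscall_name(arch: str, nr: int) -> str:
        if arch.lower() != "c000003e":
            return f"arch {arch}: syscall {nr}"
        return X86_64_SYSCALLS_SEEN.get(nr, f"syscall {nr}")

    print(syscall_name("c000003e", 46))  # sendmsg
    print(syscall_name("c000003e", 1))   # write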