Dec 13 03:43:20.044425 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 03:43:20.044474 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:43:20.044503 kernel: BIOS-provided physical RAM map:
Dec 13 03:43:20.044622 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 03:43:20.044640 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 03:43:20.044657 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 03:43:20.044677 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 03:43:20.044694 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 03:43:20.044715 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 03:43:20.044731 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 03:43:20.044748 kernel: NX (Execute Disable) protection: active
Dec 13 03:43:20.044764 kernel: SMBIOS 2.8 present.
Dec 13 03:43:20.044781 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 03:43:20.044797 kernel: Hypervisor detected: KVM
Dec 13 03:43:20.044817 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 03:43:20.044838 kernel: kvm-clock: cpu 0, msr 3c19b001, primary cpu clock
Dec 13 03:43:20.044856 kernel: kvm-clock: using sched offset of 5571540260 cycles
Dec 13 03:43:20.044875 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 03:43:20.044894 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 03:43:20.044912 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 03:43:20.044931 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 03:43:20.044950 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 03:43:20.044968 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 03:43:20.044991 kernel: ACPI: Early table checksum verification disabled
Dec 13 03:43:20.045009 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 03:43:20.045027 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:43:20.045046 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:43:20.045064 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:43:20.045082 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 03:43:20.045121 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:43:20.045140 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:43:20.045158 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 03:43:20.045180 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 03:43:20.045199 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 03:43:20.045216 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 03:43:20.045234 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 03:43:20.045252 kernel: No NUMA configuration found
Dec 13 03:43:20.045270 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 03:43:20.045288 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 03:43:20.045306 kernel: Zone ranges:
Dec 13 03:43:20.045335 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 03:43:20.045354 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 03:43:20.045372 kernel:   Normal   empty
Dec 13 03:43:20.045391 kernel: Movable zone start for each node
Dec 13 03:43:20.045410 kernel: Early memory node ranges
Dec 13 03:43:20.045428 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 03:43:20.045450 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 03:43:20.045469 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 03:43:20.045488 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 03:43:20.045506 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 03:43:20.047591 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 03:43:20.047609 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 03:43:20.047623 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 03:43:20.047638 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 03:43:20.047653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 03:43:20.047667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 03:43:20.047689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 03:43:20.047703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 03:43:20.047717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 03:43:20.047731 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 03:43:20.047745 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 03:43:20.047760 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 03:43:20.047773 kernel: Booting paravirtualized kernel on KVM
Dec 13 03:43:20.047788 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 03:43:20.047802 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 03:43:20.047821 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 03:43:20.047835 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 03:43:20.047849 kernel: pcpu-alloc: [0] 0 1 
Dec 13 03:43:20.047863 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Dec 13 03:43:20.047877 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 03:43:20.047891 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515805
Dec 13 03:43:20.047906 kernel: Policy zone: DMA32
Dec 13 03:43:20.047923 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:43:20.047941 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 03:43:20.047955 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 03:43:20.047970 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 03:43:20.047984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 03:43:20.047999 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123076K reserved, 0K cma-reserved)
Dec 13 03:43:20.048014 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 03:43:20.048028 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 03:43:20.048042 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 03:43:20.048058 kernel: rcu: Hierarchical RCU implementation.
Dec 13 03:43:20.048073 kernel: rcu:         RCU event tracing is enabled.
Dec 13 03:43:20.048088 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 03:43:20.048102 kernel:         Rude variant of Tasks RCU enabled.
Dec 13 03:43:20.048117 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 03:43:20.048131 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 03:43:20.048146 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 03:43:20.048160 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 03:43:20.048173 kernel: Console: colour VGA+ 80x25
Dec 13 03:43:20.048190 kernel: printk: console [tty0] enabled
Dec 13 03:43:20.048204 kernel: printk: console [ttyS0] enabled
Dec 13 03:43:20.048219 kernel: ACPI: Core revision 20210730
Dec 13 03:43:20.048233 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 03:43:20.048247 kernel: x2apic enabled
Dec 13 03:43:20.048261 kernel: Switched APIC routing to physical x2apic.
Dec 13 03:43:20.048275 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 03:43:20.048290 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 03:43:20.048304 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 03:43:20.048319 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 03:43:20.048335 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 03:43:20.048350 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 03:43:20.048364 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 03:43:20.048379 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 03:43:20.048393 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 03:43:20.048407 kernel: Speculative Store Bypass: Vulnerable
Dec 13 03:43:20.048421 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 03:43:20.048436 kernel: Freeing SMP alternatives memory: 32K
Dec 13 03:43:20.048450 kernel: pid_max: default: 32768 minimum: 301
Dec 13 03:43:20.048468 kernel: LSM: Security Framework initializing
Dec 13 03:43:20.048482 kernel: SELinux:  Initializing.
Dec 13 03:43:20.048496 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 03:43:20.048511 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 03:43:20.048548 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 03:43:20.048563 kernel: Performance Events: AMD PMU driver.
Dec 13 03:43:20.048577 kernel: ... version:                0
Dec 13 03:43:20.048591 kernel: ... bit width:              48
Dec 13 03:43:20.048605 kernel: ... generic registers:      4
Dec 13 03:43:20.048634 kernel: ... value mask:             0000ffffffffffff
Dec 13 03:43:20.048649 kernel: ... max period:             00007fffffffffff
Dec 13 03:43:20.048664 kernel: ... fixed-purpose events:   0
Dec 13 03:43:20.048682 kernel: ... event mask:             000000000000000f
Dec 13 03:43:20.048696 kernel: signal: max sigframe size: 1440
Dec 13 03:43:20.048711 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 03:43:20.048725 kernel: smp: Bringing up secondary CPUs ...
Dec 13 03:43:20.048740 kernel: x86: Booting SMP configuration:
Dec 13 03:43:20.048759 kernel: .... node  #0, CPUs:      #1
Dec 13 03:43:20.048774 kernel: kvm-clock: cpu 1, msr 3c19b041, secondary cpu clock
Dec 13 03:43:20.048788 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Dec 13 03:43:20.048803 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 03:43:20.048818 kernel: smpboot: Max logical packages: 2
Dec 13 03:43:20.048832 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 03:43:20.048847 kernel: devtmpfs: initialized
Dec 13 03:43:20.048861 kernel: x86/mm: Memory block size: 128MB
Dec 13 03:43:20.048876 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 03:43:20.048894 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 03:43:20.048909 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 03:43:20.048924 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 03:43:20.048939 kernel: audit: initializing netlink subsys (disabled)
Dec 13 03:43:20.048953 kernel: audit: type=2000 audit(1734061400.120:1): state=initialized audit_enabled=0 res=1
Dec 13 03:43:20.048968 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 03:43:20.048983 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 03:43:20.048998 kernel: cpuidle: using governor menu
Dec 13 03:43:20.049012 kernel: ACPI: bus type PCI registered
Dec 13 03:43:20.049030 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 03:43:20.049045 kernel: dca service started, version 1.12.1
Dec 13 03:43:20.049060 kernel: PCI: Using configuration type 1 for base access
Dec 13 03:43:20.049075 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 03:43:20.049089 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 03:43:20.049122 kernel: ACPI: Added _OSI(Module Device)
Dec 13 03:43:20.049137 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 03:43:20.049153 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 03:43:20.049167 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 03:43:20.049185 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 03:43:20.049200 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 03:43:20.049214 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 03:43:20.049229 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 03:43:20.049244 kernel: ACPI: Interpreter enabled
Dec 13 03:43:20.049258 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 03:43:20.049273 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 03:43:20.049288 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 03:43:20.049303 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 03:43:20.049322 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 03:43:20.049614 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 03:43:20.049796 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 03:43:20.049822 kernel: acpiphp: Slot [3] registered
Dec 13 03:43:20.049837 kernel: acpiphp: Slot [4] registered
Dec 13 03:43:20.049852 kernel: acpiphp: Slot [5] registered
Dec 13 03:43:20.049867 kernel: acpiphp: Slot [6] registered
Dec 13 03:43:20.049887 kernel: acpiphp: Slot [7] registered
Dec 13 03:43:20.049902 kernel: acpiphp: Slot [8] registered
Dec 13 03:43:20.049917 kernel: acpiphp: Slot [9] registered
Dec 13 03:43:20.049931 kernel: acpiphp: Slot [10] registered
Dec 13 03:43:20.049946 kernel: acpiphp: Slot [11] registered
Dec 13 03:43:20.049961 kernel: acpiphp: Slot [12] registered
Dec 13 03:43:20.049975 kernel: acpiphp: Slot [13] registered
Dec 13 03:43:20.049988 kernel: acpiphp: Slot [14] registered
Dec 13 03:43:20.049997 kernel: acpiphp: Slot [15] registered
Dec 13 03:43:20.050007 kernel: acpiphp: Slot [16] registered
Dec 13 03:43:20.050019 kernel: acpiphp: Slot [17] registered
Dec 13 03:43:20.050028 kernel: acpiphp: Slot [18] registered
Dec 13 03:43:20.050038 kernel: acpiphp: Slot [19] registered
Dec 13 03:43:20.050047 kernel: acpiphp: Slot [20] registered
Dec 13 03:43:20.050055 kernel: acpiphp: Slot [21] registered
Dec 13 03:43:20.050063 kernel: acpiphp: Slot [22] registered
Dec 13 03:43:20.050070 kernel: acpiphp: Slot [23] registered
Dec 13 03:43:20.050078 kernel: acpiphp: Slot [24] registered
Dec 13 03:43:20.050086 kernel: acpiphp: Slot [25] registered
Dec 13 03:43:20.050095 kernel: acpiphp: Slot [26] registered
Dec 13 03:43:20.050103 kernel: acpiphp: Slot [27] registered
Dec 13 03:43:20.050111 kernel: acpiphp: Slot [28] registered
Dec 13 03:43:20.050118 kernel: acpiphp: Slot [29] registered
Dec 13 03:43:20.050126 kernel: acpiphp: Slot [30] registered
Dec 13 03:43:20.050134 kernel: acpiphp: Slot [31] registered
Dec 13 03:43:20.050142 kernel: PCI host bridge to bus 0000:00
Dec 13 03:43:20.050241 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 03:43:20.050317 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 03:43:20.050394 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 03:43:20.050467 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 03:43:20.050557 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 03:43:20.050631 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 03:43:20.050729 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 03:43:20.050820 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 03:43:20.050924 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 03:43:20.051009 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc120-0xc12f]
Dec 13 03:43:20.051091 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Dec 13 03:43:20.051173 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Dec 13 03:43:20.051253 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Dec 13 03:43:20.051336 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Dec 13 03:43:20.051425 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 03:43:20.054548 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 03:43:20.054662 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 03:43:20.054784 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 03:43:20.054885 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 03:43:20.054978 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 03:43:20.055067 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 03:43:20.055161 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 03:43:20.055249 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 03:43:20.055351 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 03:43:20.055441 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc0bf]
Dec 13 03:43:20.055551 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 03:43:20.055643 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 03:43:20.055732 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 03:43:20.055840 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 03:43:20.055932 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Dec 13 03:43:20.056020 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 03:43:20.056107 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 03:43:20.056203 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 03:43:20.056293 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0c0-0xc0ff]
Dec 13 03:43:20.056380 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 03:43:20.056479 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 03:43:20.058616 kernel: pci 0000:00:06.0: reg 0x10: [io  0xc100-0xc11f]
Dec 13 03:43:20.058710 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 03:43:20.058724 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 03:43:20.058733 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 03:43:20.058742 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 03:43:20.058750 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 03:43:20.058759 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 03:43:20.058771 kernel: iommu: Default domain type: Translated 
Dec 13 03:43:20.058780 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Dec 13 03:43:20.058867 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 03:43:20.058954 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 03:43:20.059040 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 03:43:20.059053 kernel: vgaarb: loaded
Dec 13 03:43:20.059061 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 03:43:20.059070 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 03:43:20.059079 kernel: PTP clock support registered
Dec 13 03:43:20.059092 kernel: PCI: Using ACPI for IRQ routing
Dec 13 03:43:20.059100 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 03:43:20.059109 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 03:43:20.059117 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 03:43:20.059126 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 03:43:20.059134 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 03:43:20.059143 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 03:43:20.059151 kernel: pnp: PnP ACPI init
Dec 13 03:43:20.059251 kernel: pnp 00:03: [dma 2]
Dec 13 03:43:20.059269 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 03:43:20.059278 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 03:43:20.059286 kernel: NET: Registered PF_INET protocol family
Dec 13 03:43:20.059295 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 03:43:20.059304 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 03:43:20.059312 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 03:43:20.059321 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 03:43:20.059330 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 03:43:20.059341 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 03:43:20.059349 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 03:43:20.059358 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 03:43:20.059367 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 03:43:20.059375 kernel: NET: Registered PF_XDP protocol family
Dec 13 03:43:20.059458 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 03:43:20.060649 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 03:43:20.060745 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 03:43:20.060822 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 03:43:20.060919 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 03:43:20.061031 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 03:43:20.061128 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 03:43:20.061226 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 03:43:20.061240 kernel: PCI: CLS 0 bytes, default 64
Dec 13 03:43:20.061249 kernel: Initialise system trusted keyrings
Dec 13 03:43:20.061258 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 03:43:20.061266 kernel: Key type asymmetric registered
Dec 13 03:43:20.061280 kernel: Asymmetric key parser 'x509' registered
Dec 13 03:43:20.061289 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 03:43:20.061297 kernel: io scheduler mq-deadline registered
Dec 13 03:43:20.061306 kernel: io scheduler kyber registered
Dec 13 03:43:20.061314 kernel: io scheduler bfq registered
Dec 13 03:43:20.061323 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 03:43:20.061332 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 03:43:20.061340 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 03:43:20.061349 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 03:43:20.061361 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 03:43:20.061370 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 03:43:20.061379 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 03:43:20.061387 kernel: random: crng init done
Dec 13 03:43:20.061396 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 03:43:20.061405 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 03:43:20.061413 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 03:43:20.061560 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 03:43:20.061580 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 03:43:20.061661 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 03:43:20.061740 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T03:43:19 UTC (1734061399)
Dec 13 03:43:20.061819 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 03:43:20.061831 kernel: NET: Registered PF_INET6 protocol family
Dec 13 03:43:20.061840 kernel: Segment Routing with IPv6
Dec 13 03:43:20.061848 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 03:43:20.061857 kernel: NET: Registered PF_PACKET protocol family
Dec 13 03:43:20.061865 kernel: Key type dns_resolver registered
Dec 13 03:43:20.061878 kernel: IPI shorthand broadcast: enabled
Dec 13 03:43:20.061886 kernel: sched_clock: Marking stable (736027901, 123811350)->(876904574, -17065323)
Dec 13 03:43:20.061895 kernel: registered taskstats version 1
Dec 13 03:43:20.061903 kernel: Loading compiled-in X.509 certificates
Dec 13 03:43:20.061912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 03:43:20.061920 kernel: Key type .fscrypt registered
Dec 13 03:43:20.061929 kernel: Key type fscrypt-provisioning registered
Dec 13 03:43:20.061938 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 03:43:20.061948 kernel: ima: Allocated hash algorithm: sha1
Dec 13 03:43:20.061957 kernel: ima: No architecture policies found
Dec 13 03:43:20.061965 kernel: clk: Disabling unused clocks
Dec 13 03:43:20.061973 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 03:43:20.061982 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 03:43:20.061991 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 03:43:20.061999 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 03:43:20.062008 kernel: Run /init as init process
Dec 13 03:43:20.062017 kernel:   with arguments:
Dec 13 03:43:20.062025 kernel:     /init
Dec 13 03:43:20.062035 kernel:   with environment:
Dec 13 03:43:20.062043 kernel:     HOME=/
Dec 13 03:43:20.062051 kernel:     TERM=linux
Dec 13 03:43:20.062060 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 03:43:20.062071 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 03:43:20.062082 systemd[1]: Detected virtualization kvm.
Dec 13 03:43:20.062092 systemd[1]: Detected architecture x86-64.
Dec 13 03:43:20.062103 systemd[1]: Running in initrd.
Dec 13 03:43:20.062112 systemd[1]: No hostname configured, using default hostname.
Dec 13 03:43:20.062121 systemd[1]: Hostname set to <localhost>.
Dec 13 03:43:20.062131 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 03:43:20.062140 systemd[1]: Queued start job for default target initrd.target.
Dec 13 03:43:20.062149 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 03:43:20.062158 systemd[1]: Reached target cryptsetup.target.
Dec 13 03:43:20.062167 systemd[1]: Reached target paths.target.
Dec 13 03:43:20.062177 systemd[1]: Reached target slices.target.
Dec 13 03:43:20.062186 systemd[1]: Reached target swap.target.
Dec 13 03:43:20.062195 systemd[1]: Reached target timers.target.
Dec 13 03:43:20.062205 systemd[1]: Listening on iscsid.socket.
Dec 13 03:43:20.062214 systemd[1]: Listening on iscsiuio.socket.
Dec 13 03:43:20.062223 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 03:43:20.062232 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 03:43:20.062241 systemd[1]: Listening on systemd-journald.socket.
Dec 13 03:43:20.062252 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 03:43:20.062261 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 03:43:20.062270 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 03:43:20.062280 systemd[1]: Reached target sockets.target.
Dec 13 03:43:20.062301 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 03:43:20.062313 systemd[1]: Finished network-cleanup.service.
Dec 13 03:43:20.062325 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 03:43:20.062335 systemd[1]: Starting systemd-journald.service...
Dec 13 03:43:20.062344 systemd[1]: Starting systemd-modules-load.service...
Dec 13 03:43:20.062354 systemd[1]: Starting systemd-resolved.service...
Dec 13 03:43:20.062363 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 03:43:20.062373 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 03:43:20.062382 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 03:43:20.062396 systemd-journald[185]: Journal started
Dec 13 03:43:20.062452 systemd-journald[185]: Runtime Journal (/run/log/journal/a7ae3c29f55640da83fde30ebb3bfc83) is 4.9M, max 39.5M, 34.5M free.
Dec 13 03:43:20.025910 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 03:43:20.101255 systemd[1]: Started systemd-journald.service.
Dec 13 03:43:20.101307 kernel: audit: type=1130 audit(1734061400.079:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.101323 kernel: audit: type=1130 audit(1734061400.080:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.101336 kernel: audit: type=1130 audit(1734061400.081:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.101348 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 03:43:20.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.075061 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 03:43:20.075072 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 03:43:20.104315 kernel: Bridge firewalling registered
Dec 13 03:43:20.075111 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 03:43:20.111649 kernel: audit: type=1130 audit(1734061400.104:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.077935 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 03:43:20.081250 systemd[1]: Started systemd-resolved.service.
Dec 13 03:43:20.119783 kernel: audit: type=1130 audit(1734061400.111:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.081718 systemd[1]: Reached target nss-lookup.target.
Dec 13 03:43:20.086008 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 03:43:20.102607 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 03:43:20.104077 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 03:43:20.104959 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 03:43:20.113711 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 03:43:20.134567 kernel: SCSI subsystem initialized
Dec 13 03:43:20.137193 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 03:43:20.138851 systemd[1]: Starting dracut-cmdline.service...
Dec 13 03:43:20.144669 kernel: audit: type=1130 audit(1734061400.137:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.150656 dracut-cmdline[202]: dracut-dracut-053
Dec 13 03:43:20.153290 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:43:20.161440 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 03:43:20.161472 kernel: device-mapper: uevent: version 1.0.3
Dec 13 03:43:20.161485 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 03:43:20.160444 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 03:43:20.161710 systemd[1]: Finished systemd-modules-load.service.
Dec 13 03:43:20.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.163699 systemd[1]: Starting systemd-sysctl.service...
Dec 13 03:43:20.173542 kernel: audit: type=1130 audit(1734061400.162:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.177684 systemd[1]: Finished systemd-sysctl.service.
Dec 13 03:43:20.181926 kernel: audit: type=1130 audit(1734061400.177:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.250548 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 03:43:20.271555 kernel: iscsi: registered transport (tcp)
Dec 13 03:43:20.299110 kernel: iscsi: registered transport (qla4xxx)
Dec 13 03:43:20.299186 kernel: QLogic iSCSI HBA Driver
Dec 13 03:43:20.352708 systemd[1]: Finished dracut-cmdline.service.
Dec 13 03:43:20.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.363556 kernel: audit: type=1130 audit(1734061400.353:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.363823 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 03:43:20.449577 kernel: raid6: sse2x4   gen() 12414 MB/s
Dec 13 03:43:20.466606 kernel: raid6: sse2x4   xor()  5029 MB/s
Dec 13 03:43:20.483609 kernel: raid6: sse2x2   gen() 14210 MB/s
Dec 13 03:43:20.500606 kernel: raid6: sse2x2   xor()  8813 MB/s
Dec 13 03:43:20.517610 kernel: raid6: sse2x1   gen() 10795 MB/s
Dec 13 03:43:20.535404 kernel: raid6: sse2x1   xor()  7000 MB/s
Dec 13 03:43:20.535461 kernel: raid6: using algorithm sse2x2 gen() 14210 MB/s
Dec 13 03:43:20.535489 kernel: raid6: .... xor() 8813 MB/s, rmw enabled
Dec 13 03:43:20.536285 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 03:43:20.550611 kernel: xor: measuring software checksum speed
Dec 13 03:43:20.550669 kernel:    prefetch64-sse  : 16445 MB/sec
Dec 13 03:43:20.553609 kernel:    generic_sse     : 15276 MB/sec
Dec 13 03:43:20.553667 kernel: xor: using function: prefetch64-sse (16445 MB/sec)
Dec 13 03:43:20.666578 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 03:43:20.683278 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 03:43:20.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.685000 audit: BPF prog-id=7 op=LOAD
Dec 13 03:43:20.685000 audit: BPF prog-id=8 op=LOAD
Dec 13 03:43:20.687742 systemd[1]: Starting systemd-udevd.service...
Dec 13 03:43:20.701912 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Dec 13 03:43:20.707376 systemd[1]: Started systemd-udevd.service.
Dec 13 03:43:20.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.713677 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 03:43:20.731236 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Dec 13 03:43:20.792104 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 03:43:20.793573 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 03:43:20.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.833940 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 03:43:20.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:20.909688 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Dec 13 03:43:20.937891 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 03:43:20.937920 kernel: GPT:17805311 != 41943039
Dec 13 03:43:20.937933 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 03:43:20.937945 kernel: GPT:17805311 != 41943039
Dec 13 03:43:20.937958 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 03:43:20.937970 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 03:43:20.944564 kernel: libata version 3.00 loaded.
Dec 13 03:43:20.957804 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 03:43:20.988800 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (439)
Dec 13 03:43:20.988820 kernel: scsi host0: ata_piix
Dec 13 03:43:20.988952 kernel: scsi host1: ata_piix
Dec 13 03:43:20.989062 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Dec 13 03:43:20.989075 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Dec 13 03:43:20.983714 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 03:43:21.024992 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 03:43:21.029129 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 03:43:21.029725 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 03:43:21.039839 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 03:43:21.044306 systemd[1]: Starting disk-uuid.service...
Dec 13 03:43:21.058925 disk-uuid[461]: Primary Header is updated.
Dec 13 03:43:21.058925 disk-uuid[461]: Secondary Entries is updated.
Dec 13 03:43:21.058925 disk-uuid[461]: Secondary Header is updated.
Dec 13 03:43:21.070565 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 03:43:21.078470 kernel: GPT:disk_guids don't match.
Dec 13 03:43:21.078497 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 03:43:21.078509 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 03:43:22.090560 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 03:43:22.090916 disk-uuid[462]: The operation has completed successfully.
Dec 13 03:43:22.175762 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 03:43:22.176911 systemd[1]: Finished disk-uuid.service.
Dec 13 03:43:22.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.204842 systemd[1]: Starting verity-setup.service...
Dec 13 03:43:22.241784 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Dec 13 03:43:22.355885 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 03:43:22.362412 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 03:43:22.366987 systemd[1]: Finished verity-setup.service.
Dec 13 03:43:22.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.511640 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 03:43:22.513180 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 03:43:22.515910 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 03:43:22.517566 systemd[1]: Starting ignition-setup.service...
Dec 13 03:43:22.522206 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 03:43:22.533685 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 03:43:22.533757 kernel: BTRFS info (device vda6): using free space tree
Dec 13 03:43:22.533793 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 03:43:22.553190 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 03:43:22.574541 systemd[1]: Finished ignition-setup.service.
Dec 13 03:43:22.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.576054 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 03:43:22.666306 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 03:43:22.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.667000 audit: BPF prog-id=9 op=LOAD
Dec 13 03:43:22.668997 systemd[1]: Starting systemd-networkd.service...
Dec 13 03:43:22.708960 systemd-networkd[632]: lo: Link UP
Dec 13 03:43:22.709885 systemd-networkd[632]: lo: Gained carrier
Dec 13 03:43:22.711286 systemd-networkd[632]: Enumeration completed
Dec 13 03:43:22.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.711928 systemd[1]: Started systemd-networkd.service.
Dec 13 03:43:22.712465 systemd[1]: Reached target network.target.
Dec 13 03:43:22.714816 systemd[1]: Starting iscsiuio.service...
Dec 13 03:43:22.716924 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 03:43:22.719269 systemd-networkd[632]: eth0: Link UP
Dec 13 03:43:22.719275 systemd-networkd[632]: eth0: Gained carrier
Dec 13 03:43:22.725587 systemd[1]: Started iscsiuio.service.
Dec 13 03:43:22.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.727201 systemd[1]: Starting iscsid.service...
Dec 13 03:43:22.735261 iscsid[641]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 03:43:22.735261 iscsid[641]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 03:43:22.735261 iscsid[641]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 03:43:22.735261 iscsid[641]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 03:43:22.735261 iscsid[641]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 03:43:22.735261 iscsid[641]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 03:43:22.738207 systemd[1]: Started iscsid.service.
Dec 13 03:43:22.741661 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.219/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 03:43:22.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.748153 systemd[1]: Starting dracut-initqueue.service...
Dec 13 03:43:22.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.764614 systemd[1]: Finished dracut-initqueue.service.
Dec 13 03:43:22.765990 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 03:43:22.767125 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 03:43:22.768719 systemd[1]: Reached target remote-fs.target.
Dec 13 03:43:22.771398 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 03:43:22.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.787639 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 03:43:22.932589 ignition[560]: Ignition 2.14.0
Dec 13 03:43:22.932613 ignition[560]: Stage: fetch-offline
Dec 13 03:43:22.932735 ignition[560]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:43:22.932782 ignition[560]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 03:43:22.935123 ignition[560]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 03:43:22.935340 ignition[560]: parsed url from cmdline: ""
Dec 13 03:43:22.938185 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 03:43:22.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:22.935350 ignition[560]: no config URL provided
Dec 13 03:43:22.941399 systemd-resolved[187]: Detected conflict on linux IN A 172.24.4.219
Dec 13 03:43:22.935367 ignition[560]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 03:43:22.941420 systemd-resolved[187]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Dec 13 03:43:22.935386 ignition[560]: no config at "/usr/lib/ignition/user.ign"
Dec 13 03:43:22.942557 systemd[1]: Starting ignition-fetch.service...
Dec 13 03:43:22.935405 ignition[560]: failed to fetch config: resource requires networking
Dec 13 03:43:22.936296 ignition[560]: Ignition finished successfully
Dec 13 03:43:22.964981 ignition[655]: Ignition 2.14.0
Dec 13 03:43:22.965008 ignition[655]: Stage: fetch
Dec 13 03:43:22.965320 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:43:22.965364 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 03:43:22.967655 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 03:43:22.967882 ignition[655]: parsed url from cmdline: ""
Dec 13 03:43:22.967892 ignition[655]: no config URL provided
Dec 13 03:43:22.967905 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 03:43:22.967925 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Dec 13 03:43:22.970147 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 03:43:22.970192 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 03:43:22.970231 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 03:43:23.307574 ignition[655]: GET result: OK
Dec 13 03:43:23.307908 ignition[655]: parsing config with SHA512: a56c467c560821a6c91cd9e0aa00745cb7812729d35f52879c5d97c09517f885e694e6cc4fd56b0ce6ab51ef501732118773739abc32d745825e666a98965c6e
Dec 13 03:43:23.331316 unknown[655]: fetched base config from "system"
Dec 13 03:43:23.332843 unknown[655]: fetched base config from "system"
Dec 13 03:43:23.334272 unknown[655]: fetched user config from "openstack"
Dec 13 03:43:23.336887 ignition[655]: fetch: fetch complete
Dec 13 03:43:23.338124 ignition[655]: fetch: fetch passed
Dec 13 03:43:23.339386 ignition[655]: Ignition finished successfully
Dec 13 03:43:23.343701 systemd[1]: Finished ignition-fetch.service.
Dec 13 03:43:23.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:23.346984 systemd[1]: Starting ignition-kargs.service...
Dec 13 03:43:23.379023 ignition[661]: Ignition 2.14.0
Dec 13 03:43:23.379051 ignition[661]: Stage: kargs
Dec 13 03:43:23.379308 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:43:23.379355 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 03:43:23.381657 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 03:43:23.384479 ignition[661]: kargs: kargs passed
Dec 13 03:43:23.384633 ignition[661]: Ignition finished successfully
Dec 13 03:43:23.386853 systemd[1]: Finished ignition-kargs.service.
Dec 13 03:43:23.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:23.390739 systemd[1]: Starting ignition-disks.service...
Dec 13 03:43:23.407592 ignition[667]: Ignition 2.14.0
Dec 13 03:43:23.407619 ignition[667]: Stage: disks
Dec 13 03:43:23.407873 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:43:23.407919 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 03:43:23.410186 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 03:43:23.413912 ignition[667]: disks: disks passed
Dec 13 03:43:23.414025 ignition[667]: Ignition finished successfully
Dec 13 03:43:23.416133 systemd[1]: Finished ignition-disks.service.
Dec 13 03:43:23.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:23.418149 systemd[1]: Reached target initrd-root-device.target.
Dec 13 03:43:23.420261 systemd[1]: Reached target local-fs-pre.target.
Dec 13 03:43:23.422499 systemd[1]: Reached target local-fs.target.
Dec 13 03:43:23.424713 systemd[1]: Reached target sysinit.target.
Dec 13 03:43:23.427126 systemd[1]: Reached target basic.target.
Dec 13 03:43:23.431276 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 03:43:23.463538 systemd-fsck[675]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 03:43:23.477759 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 03:43:23.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:23.480763 systemd[1]: Mounting sysroot.mount...
Dec 13 03:43:23.504587 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 03:43:23.506409 systemd[1]: Mounted sysroot.mount.
Dec 13 03:43:23.508853 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 03:43:23.514034 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 03:43:23.517415 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 03:43:23.521165 systemd[1]: Starting flatcar-openstack-hostname.service...
Dec 13 03:43:23.522469 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 03:43:23.522585 systemd[1]: Reached target ignition-diskful.target.
Dec 13 03:43:23.526192 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 03:43:23.536569 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 03:43:23.539775 systemd[1]: Starting initrd-setup-root.service...
Dec 13 03:43:23.553455 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 03:43:23.573858 initrd-setup-root[695]: cut: /sysroot/etc/group: No such file or directory
Dec 13 03:43:23.577591 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682)
Dec 13 03:43:23.586020 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 03:43:23.586070 kernel: BTRFS info (device vda6): using free space tree
Dec 13 03:43:23.586083 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 03:43:23.587877 initrd-setup-root[707]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 03:43:23.592925 initrd-setup-root[727]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 03:43:23.606316 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 03:43:23.672670 systemd[1]: Finished initrd-setup-root.service.
Dec 13 03:43:23.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:23.674122 systemd[1]: Starting ignition-mount.service...
Dec 13 03:43:23.678831 systemd[1]: Starting sysroot-boot.service...
Dec 13 03:43:23.688383 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 03:43:23.689138 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 03:43:23.706888 ignition[749]: INFO     : Ignition 2.14.0
Dec 13 03:43:23.707695 ignition[749]: INFO     : Stage: mount
Dec 13 03:43:23.708347 ignition[749]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:43:23.709172 ignition[749]: DEBUG    : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 03:43:23.711344 ignition[749]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 03:43:23.713365 ignition[749]: INFO     : mount: mount passed
Dec 13 03:43:23.713950 ignition[749]: INFO     : Ignition finished successfully
Dec 13 03:43:23.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:23.715651 systemd[1]: Finished ignition-mount.service.
Dec 13 03:43:23.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:23.739409 systemd[1]: Finished sysroot-boot.service.
Dec 13 03:43:23.759635 coreos-metadata[681]: Dec 13 03:43:23.759 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 03:43:23.781093 coreos-metadata[681]: Dec 13 03:43:23.781 INFO Fetch successful
Dec 13 03:43:23.781697 coreos-metadata[681]: Dec 13 03:43:23.781 INFO wrote hostname ci-3510-3-6-b-896f86a818.novalocal to /sysroot/etc/hostname
Dec 13 03:43:23.787104 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 03:43:23.787327 systemd[1]: Finished flatcar-openstack-hostname.service.
Dec 13 03:43:23.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:23.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:23.791631 systemd[1]: Starting ignition-files.service...
Dec 13 03:43:23.807500 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 03:43:23.819619 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758)
Dec 13 03:43:23.831568 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 03:43:23.831665 kernel: BTRFS info (device vda6): using free space tree
Dec 13 03:43:23.831688 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 03:43:23.844974 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 03:43:23.864186 ignition[777]: INFO     : Ignition 2.14.0
Dec 13 03:43:23.864186 ignition[777]: INFO     : Stage: files
Dec 13 03:43:23.867103 ignition[777]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:43:23.867103 ignition[777]: DEBUG    : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 03:43:23.867103 ignition[777]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 03:43:23.873474 ignition[777]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 03:43:23.873474 ignition[777]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 03:43:23.873474 ignition[777]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 03:43:23.879009 ignition[777]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 03:43:23.879009 ignition[777]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 03:43:23.879009 ignition[777]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 03:43:23.878998 unknown[777]: wrote ssh authorized keys file for user: core
Dec 13 03:43:23.888600 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 03:43:23.888600 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 03:43:23.888600 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 03:43:23.888600 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 03:43:23.947176 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 03:43:24.253989 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 03:43:24.255141 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 03:43:24.256189 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 03:43:24.257014 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Dec 13 03:43:24.258140 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 03:43:24.259188 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 03:43:24.259188 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 03:43:24.259188 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 03:43:24.259188 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 03:43:24.262535 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 03:43:24.262535 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 03:43:24.262535 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 03:43:24.262535 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 03:43:24.262535 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 03:43:24.262535 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 03:43:24.416257 systemd-networkd[632]: eth0: Gained IPv6LL
Dec 13 03:43:24.806449 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 03:43:26.616825 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 03:43:26.619945 ignition[777]: INFO     : files: op(c): [started]  processing unit "coreos-metadata-sshkeys@.service"
Dec 13 03:43:26.621771 ignition[777]: INFO     : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 03:43:26.621771 ignition[777]: INFO     : files: op(d): [started]  processing unit "containerd.service"
Dec 13 03:43:26.802758 ignition[777]: INFO     : files: op(d): op(e): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(d): [finished] processing unit "containerd.service"
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(f): [started]  processing unit "prepare-helm.service"
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(f): op(10): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(f): [finished] processing unit "prepare-helm.service"
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(11): [started]  setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(12): [started]  setting preset to enabled for "prepare-helm.service"
Dec 13 03:43:26.805551 ignition[777]: INFO     : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 03:43:26.863129 ignition[777]: INFO     : files: createResultFile: createFiles: op(13): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 03:43:26.863129 ignition[777]: INFO     : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 03:43:26.863129 ignition[777]: INFO     : files: files passed
Dec 13 03:43:26.863129 ignition[777]: INFO     : Ignition finished successfully
Dec 13 03:43:26.886399 kernel: kauditd_printk_skb: 27 callbacks suppressed
Dec 13 03:43:26.886453 kernel: audit: type=1130 audit(1734061406.870:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:26.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:26.866162 systemd[1]: Finished ignition-files.service.
Dec 13 03:43:26.875404 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 03:43:26.885291 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 03:43:26.887701 systemd[1]: Starting ignition-quench.service...
Dec 13 03:43:26.908508 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 03:43:26.908758 systemd[1]: Finished ignition-quench.service.
Dec 13 03:43:26.930692 kernel: audit: type=1130 audit(1734061406.910:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:26.930744 kernel: audit: type=1131 audit(1734061406.910:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:26.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:26.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.246613 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 03:43:27.248062 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 03:43:27.263269 kernel: audit: type=1130 audit(1734061407.250:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.250931 systemd[1]: Reached target ignition-complete.target.
Dec 13 03:43:27.267643 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 03:43:27.299306 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 03:43:27.301032 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 03:43:27.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.307909 systemd[1]: Reached target initrd-fs.target.
Dec 13 03:43:27.320793 kernel: audit: type=1130 audit(1734061407.302:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.320845 kernel: audit: type=1131 audit(1734061407.307:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.321789 systemd[1]: Reached target initrd.target.
Dec 13 03:43:27.323062 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 03:43:27.324966 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 03:43:27.355328 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 03:43:27.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.359211 systemd[1]: Starting initrd-cleanup.service...
Dec 13 03:43:27.368786 kernel: audit: type=1130 audit(1734061407.356:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.385727 systemd[1]: Stopped target nss-lookup.target.
Dec 13 03:43:27.388846 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 03:43:27.390888 systemd[1]: Stopped target timers.target.
Dec 13 03:43:27.393260 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 03:43:27.406244 kernel: audit: type=1131 audit(1734061407.395:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.393732 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 03:43:27.396172 systemd[1]: Stopped target initrd.target.
Dec 13 03:43:27.407695 systemd[1]: Stopped target basic.target.
Dec 13 03:43:27.410209 systemd[1]: Stopped target ignition-complete.target.
Dec 13 03:43:27.412332 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 03:43:27.414737 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 03:43:27.417005 systemd[1]: Stopped target remote-fs.target.
Dec 13 03:43:27.419004 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 03:43:27.421243 systemd[1]: Stopped target sysinit.target.
Dec 13 03:43:27.423367 systemd[1]: Stopped target local-fs.target.
Dec 13 03:43:27.425642 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 03:43:27.427487 systemd[1]: Stopped target swap.target.
Dec 13 03:43:27.429195 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 03:43:27.429729 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 03:43:27.434564 kernel: audit: type=1131 audit(1734061407.430:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.431418 systemd[1]: Stopped target cryptsetup.target.
Dec 13 03:43:27.435827 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 03:43:27.436177 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 03:43:27.441270 kernel: audit: type=1131 audit(1734061407.437:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.438124 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 03:43:27.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.438491 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 03:43:27.442884 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 03:43:27.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.443235 systemd[1]: Stopped ignition-files.service.
Dec 13 03:43:27.446943 systemd[1]: Stopping ignition-mount.service...
Dec 13 03:43:27.456372 ignition[815]: INFO     : Ignition 2.14.0
Dec 13 03:43:27.456600 systemd[1]: Stopping iscsiuio.service...
Dec 13 03:43:27.458248 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 03:43:27.459264 ignition[815]: INFO     : Stage: umount
Dec 13 03:43:27.459264 ignition[815]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:43:27.459264 ignition[815]: DEBUG    : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 03:43:27.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.458900 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 03:43:27.468191 ignition[815]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 03:43:27.468191 ignition[815]: INFO     : umount: umount passed
Dec 13 03:43:27.468191 ignition[815]: INFO     : Ignition finished successfully
Dec 13 03:43:27.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.462683 systemd[1]: Stopping sysroot-boot.service...
Dec 13 03:43:27.471371 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 03:43:27.471620 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 03:43:27.472417 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 03:43:27.472591 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 03:43:27.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.479427 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 03:43:27.479542 systemd[1]: Stopped iscsiuio.service.
Dec 13 03:43:27.480359 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 03:43:27.480444 systemd[1]: Stopped ignition-mount.service.
Dec 13 03:43:27.481274 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 03:43:27.481376 systemd[1]: Stopped ignition-disks.service.
Dec 13 03:43:27.481863 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 03:43:27.481900 systemd[1]: Stopped ignition-kargs.service.
Dec 13 03:43:27.482342 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 03:43:27.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.482377 systemd[1]: Stopped ignition-fetch.service.
Dec 13 03:43:27.482870 systemd[1]: Stopped target network.target.
Dec 13 03:43:27.483268 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 03:43:27.483307 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 03:43:27.483777 systemd[1]: Stopped target paths.target.
Dec 13 03:43:27.484896 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 03:43:27.488588 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 03:43:27.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.489102 systemd[1]: Stopped target slices.target.
Dec 13 03:43:27.489481 systemd[1]: Stopped target sockets.target.
Dec 13 03:43:27.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.489940 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 03:43:27.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.489968 systemd[1]: Closed iscsid.socket.
Dec 13 03:43:27.490353 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 03:43:27.490382 systemd[1]: Closed iscsiuio.socket.
Dec 13 03:43:27.490814 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 03:43:27.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.490851 systemd[1]: Stopped ignition-setup.service.
Dec 13 03:43:27.491491 systemd[1]: Stopping systemd-networkd.service...
Dec 13 03:43:27.494118 systemd[1]: Stopping systemd-resolved.service...
Dec 13 03:43:27.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.494731 systemd-networkd[632]: eth0: DHCPv6 lease lost
Dec 13 03:43:27.510000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 03:43:27.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.497827 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 03:43:27.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.498470 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 03:43:27.498591 systemd[1]: Stopped systemd-networkd.service.
Dec 13 03:43:27.501029 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 03:43:27.501149 systemd[1]: Finished initrd-cleanup.service.
Dec 13 03:43:27.502437 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 03:43:27.502554 systemd[1]: Stopped sysroot-boot.service.
Dec 13 03:43:27.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.504306 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 03:43:27.504339 systemd[1]: Closed systemd-networkd.socket.
Dec 13 03:43:27.505186 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 03:43:27.505235 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 03:43:27.523000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 03:43:27.506990 systemd[1]: Stopping network-cleanup.service...
Dec 13 03:43:27.508972 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 03:43:27.509033 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 03:43:27.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.510047 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 03:43:27.510103 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 03:43:27.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.511273 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 03:43:27.511311 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 03:43:27.512129 systemd[1]: Stopping systemd-udevd.service...
Dec 13 03:43:27.519375 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 03:43:27.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.519900 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 03:43:27.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.520000 systemd[1]: Stopped systemd-resolved.service.
Dec 13 03:43:27.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.524330 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 03:43:27.525351 systemd[1]: Stopped network-cleanup.service.
Dec 13 03:43:27.526498 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 03:43:27.526706 systemd[1]: Stopped systemd-udevd.service.
Dec 13 03:43:27.528965 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 03:43:27.529017 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 03:43:27.530121 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 03:43:27.530152 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 03:43:27.531029 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 03:43:27.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.531093 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 03:43:27.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:27.532031 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 03:43:27.532077 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 03:43:27.533215 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 03:43:27.533265 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 03:43:27.535159 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 03:43:27.535817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 03:43:27.535868 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 03:43:27.543931 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 03:43:27.544063 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 03:43:27.545132 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 03:43:27.546909 systemd[1]: Starting initrd-switch-root.service...
Dec 13 03:43:27.558375 systemd[1]: Switching root.
Dec 13 03:43:27.560000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 03:43:27.560000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 03:43:27.561000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 03:43:27.561000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 03:43:27.561000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 03:43:27.582813 iscsid[641]: iscsid shutting down.
Dec 13 03:43:27.583767 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Dec 13 03:43:27.583848 systemd-journald[185]: Journal stopped
Dec 13 03:43:32.108066 kernel: SELinux:  Class mctp_socket not defined in policy.
Dec 13 03:43:32.108114 kernel: SELinux:  Class anon_inode not defined in policy.
Dec 13 03:43:32.108133 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 03:43:32.108145 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:43:32.108156 kernel: SELinux:  policy capability open_perms=1
Dec 13 03:43:32.108167 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:43:32.108178 kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:43:32.108193 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:43:32.108203 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:43:32.108215 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 03:43:32.108226 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 03:43:32.108238 systemd[1]: Successfully loaded SELinux policy in 107.065ms.
Dec 13 03:43:32.108254 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.694ms.
Dec 13 03:43:32.108268 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 03:43:32.108280 systemd[1]: Detected virtualization kvm.
Dec 13 03:43:32.108292 systemd[1]: Detected architecture x86-64.
Dec 13 03:43:32.108303 systemd[1]: Detected first boot.
Dec 13 03:43:32.108317 systemd[1]: Hostname set to <ci-3510-3-6-b-896f86a818.novalocal>.
Dec 13 03:43:32.108328 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 03:43:32.108340 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 03:43:32.108351 systemd[1]: Populated /etc with preset unit settings.
Dec 13 03:43:32.108363 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 03:43:32.108376 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 03:43:32.108389 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 03:43:32.108403 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 03:43:32.108414 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 03:43:32.108426 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 03:43:32.108438 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 03:43:32.108449 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 03:43:32.108463 systemd[1]: Created slice system-getty.slice.
Dec 13 03:43:32.108475 systemd[1]: Created slice system-modprobe.slice.
Dec 13 03:43:32.108487 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 03:43:32.108499 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 03:43:32.108510 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 03:43:32.116614 systemd[1]: Created slice user.slice.
Dec 13 03:43:32.116632 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 03:43:32.116645 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 03:43:32.116657 systemd[1]: Set up automount boot.automount.
Dec 13 03:43:32.116668 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 03:43:32.116680 systemd[1]: Reached target integritysetup.target.
Dec 13 03:43:32.116694 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 03:43:32.116706 systemd[1]: Reached target remote-fs.target.
Dec 13 03:43:32.116717 systemd[1]: Reached target slices.target.
Dec 13 03:43:32.116729 systemd[1]: Reached target swap.target.
Dec 13 03:43:32.116740 systemd[1]: Reached target torcx.target.
Dec 13 03:43:32.116752 systemd[1]: Reached target veritysetup.target.
Dec 13 03:43:32.116763 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 03:43:32.116776 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 03:43:32.116787 kernel: kauditd_printk_skb: 47 callbacks suppressed
Dec 13 03:43:32.116799 kernel: audit: type=1400 audit(1734061411.938:88): avc:  denied  { audit_read } for  pid=1 comm="systemd" capability=37  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 03:43:32.116810 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 03:43:32.116822 kernel: audit: type=1335 audit(1734061411.938:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 03:43:32.116833 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 03:43:32.116844 systemd[1]: Listening on systemd-journald.socket.
Dec 13 03:43:32.116855 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 03:43:32.116866 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 03:43:32.116879 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 03:43:32.116891 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 03:43:32.116902 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 03:43:32.116914 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 03:43:32.116925 systemd[1]: Mounting media.mount...
Dec 13 03:43:32.116937 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:32.116948 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 03:43:32.116959 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 03:43:32.116971 systemd[1]: Mounting tmp.mount...
Dec 13 03:43:32.116984 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 03:43:32.116995 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 03:43:32.117007 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 03:43:32.117018 systemd[1]: Starting modprobe@configfs.service...
Dec 13 03:43:32.117030 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 03:43:32.117041 systemd[1]: Starting modprobe@drm.service...
Dec 13 03:43:32.117064 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 03:43:32.117076 systemd[1]: Starting modprobe@fuse.service...
Dec 13 03:43:32.117087 systemd[1]: Starting modprobe@loop.service...
Dec 13 03:43:32.117101 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 03:43:32.117117 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 03:43:32.117129 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 03:43:32.117140 systemd[1]: Starting systemd-journald.service...
Dec 13 03:43:32.117151 systemd[1]: Starting systemd-modules-load.service...
Dec 13 03:43:32.117163 systemd[1]: Starting systemd-network-generator.service...
Dec 13 03:43:32.117174 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 03:43:32.117186 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 03:43:32.117200 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:32.117211 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 03:43:32.117222 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 03:43:32.117234 systemd[1]: Mounted media.mount.
Dec 13 03:43:32.117245 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 03:43:32.117256 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 03:43:32.117268 systemd[1]: Mounted tmp.mount.
Dec 13 03:43:32.117279 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 03:43:32.117291 kernel: audit: type=1130 audit(1734061412.082:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.117304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 03:43:32.117315 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 03:43:32.117327 kernel: audit: type=1130 audit(1734061412.089:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.117337 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:43:32.117349 kernel: audit: type=1131 audit(1734061412.089:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.117360 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 03:43:32.117372 systemd[1]: Finished systemd-modules-load.service.
Dec 13 03:43:32.117383 kernel: audit: type=1130 audit(1734061412.101:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.117396 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 03:43:32.117408 kernel: audit: type=1131 audit(1734061412.101:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.117419 systemd[1]: Finished modprobe@drm.service.
Dec 13 03:43:32.117430 systemd[1]: Finished systemd-network-generator.service.
Dec 13 03:43:32.117445 systemd-journald[952]: Journal started
Dec 13 03:43:32.117488 systemd-journald[952]: Runtime Journal (/run/log/journal/a7ae3c29f55640da83fde30ebb3bfc83) is 4.9M, max 39.5M, 34.5M free.
Dec 13 03:43:31.938000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 03:43:32.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.129595 kernel: audit: type=1305 audit(1734061412.106:95): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 03:43:32.129626 systemd[1]: Started systemd-journald.service.
Dec 13 03:43:32.129644 kernel: audit: type=1300 audit(1734061412.106:95): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffcc9b0cc50 a2=4000 a3=7ffcc9b0ccec items=0 ppid=1 pid=952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:43:32.106000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 03:43:32.106000 audit[952]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffcc9b0cc50 a2=4000 a3=7ffcc9b0ccec items=0 ppid=1 pid=952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:43:32.131806 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 03:43:32.132844 systemd[1]: Reached target network-pre.target.
Dec 13 03:43:32.133302 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 03:43:32.106000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 03:43:32.139528 kernel: audit: type=1327 audit(1734061412.106:95): proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 03:43:32.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.142978 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 03:43:32.144885 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 03:43:32.145434 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:43:32.146567 systemd[1]: Starting systemd-random-seed.service...
Dec 13 03:43:32.148124 systemd[1]: Starting systemd-sysctl.service...
Dec 13 03:43:32.154367 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 03:43:32.156392 systemd-journald[952]: Time spent on flushing to /var/log/journal/a7ae3c29f55640da83fde30ebb3bfc83 is 35.782ms for 1024 entries.
Dec 13 03:43:32.156392 systemd-journald[952]: System Journal (/var/log/journal/a7ae3c29f55640da83fde30ebb3bfc83) is 8.0M, max 584.8M, 576.8M free.
Dec 13 03:43:32.227698 systemd-journald[952]: Received client request to flush runtime journal.
Dec 13 03:43:32.227747 kernel: loop: module loaded
Dec 13 03:43:32.228558 kernel: fuse: init (API version 7.34)
Dec 13 03:43:32.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.157888 systemd[1]: Finished modprobe@configfs.service.
Dec 13 03:43:32.159786 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 03:43:32.163882 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 03:43:32.176682 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 03:43:32.176898 systemd[1]: Finished modprobe@loop.service.
Dec 13 03:43:32.177649 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 03:43:32.180478 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 03:43:32.180673 systemd[1]: Finished modprobe@fuse.service.
Dec 13 03:43:32.182437 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 03:43:32.191805 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 03:43:32.194259 systemd[1]: Finished systemd-random-seed.service.
Dec 13 03:43:32.194826 systemd[1]: Reached target first-boot-complete.target.
Dec 13 03:43:32.195608 systemd[1]: Finished systemd-sysctl.service.
Dec 13 03:43:32.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.216115 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 03:43:32.218028 systemd[1]: Starting systemd-sysusers.service...
Dec 13 03:43:32.232977 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 03:43:32.260551 systemd[1]: Finished systemd-sysusers.service.
Dec 13 03:43:32.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.262168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 03:43:32.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.262892 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 03:43:32.264344 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 03:43:32.276461 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 03:43:32.305348 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 03:43:32.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.902693 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 03:43:32.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.906324 systemd[1]: Starting systemd-udevd.service...
Dec 13 03:43:32.953095 systemd-udevd[1017]: Using default interface naming scheme 'v252'.
Dec 13 03:43:32.994083 systemd[1]: Started systemd-udevd.service.
Dec 13 03:43:32.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:32.996068 systemd[1]: Starting systemd-networkd.service...
Dec 13 03:43:33.015461 systemd[1]: Starting systemd-userdbd.service...
Dec 13 03:43:33.078747 systemd[1]: Found device dev-ttyS0.device.
Dec 13 03:43:33.105543 systemd[1]: Started systemd-userdbd.service.
Dec 13 03:43:33.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:33.135094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 03:43:33.154533 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 03:43:33.165000 audit[1024]: AVC avc:  denied  { confidentiality } for  pid=1024 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 03:43:33.165000 audit[1024]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e1738e9fd0 a1=337fc a2=7f45a6589bc5 a3=5 items=110 ppid=1017 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:43:33.165000 audit: CWD cwd="/"
Dec 13 03:43:33.165000 audit: PATH item=0 name=(null) inode=1038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=1 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=2 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=3 name=(null) inode=14452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=4 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=5 name=(null) inode=14453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=6 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=7 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=8 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=9 name=(null) inode=14455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=10 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=11 name=(null) inode=14456 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=12 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=13 name=(null) inode=14457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=14 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=15 name=(null) inode=14458 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=16 name=(null) inode=14454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=17 name=(null) inode=14459 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=18 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=19 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=20 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=21 name=(null) inode=14461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=22 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=23 name=(null) inode=14462 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=24 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=25 name=(null) inode=14463 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=26 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=27 name=(null) inode=14464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=28 name=(null) inode=14460 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=29 name=(null) inode=14465 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=30 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=31 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=32 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=33 name=(null) inode=14467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=34 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=35 name=(null) inode=14468 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=36 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=37 name=(null) inode=14469 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=38 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=39 name=(null) inode=14470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=40 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=41 name=(null) inode=14471 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=42 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=43 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=44 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=45 name=(null) inode=14473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=46 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=47 name=(null) inode=14474 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=48 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=49 name=(null) inode=14475 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=50 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=51 name=(null) inode=14476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=52 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=53 name=(null) inode=14477 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=54 name=(null) inode=1038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=55 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=56 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=57 name=(null) inode=14479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=58 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=59 name=(null) inode=14480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=60 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=61 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=62 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=63 name=(null) inode=14482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=64 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=65 name=(null) inode=14483 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=66 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=67 name=(null) inode=14484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=68 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=69 name=(null) inode=14485 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=70 name=(null) inode=14481 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=71 name=(null) inode=14486 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=72 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=73 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=74 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=75 name=(null) inode=14488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=76 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=77 name=(null) inode=14489 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=78 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=79 name=(null) inode=14490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=80 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=81 name=(null) inode=14491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=82 name=(null) inode=14487 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=83 name=(null) inode=14492 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=84 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=85 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=86 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=87 name=(null) inode=14494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=88 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=89 name=(null) inode=14495 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=90 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=91 name=(null) inode=14496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=92 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=93 name=(null) inode=14497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=94 name=(null) inode=14493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=95 name=(null) inode=14498 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=96 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=97 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=98 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=99 name=(null) inode=14500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=100 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=101 name=(null) inode=14501 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=102 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=103 name=(null) inode=14502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=104 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=105 name=(null) inode=14503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=106 name=(null) inode=14499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=107 name=(null) inode=14504 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PATH item=109 name=(null) inode=14505 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:43:33.165000 audit: PROCTITLE proctitle="(udev-worker)"
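The AVC, SYSCALL, CWD, PATH, and PROCTITLE records above all belong to a single audit event: the udev worker touched 110 tracefs/debugfs entries in one pass, and every touched path becomes its own PATH record. Below is a minimal Python sketch for summarizing journald-formatted audit output like this; it groups records purely by their timestamp prefix, which happens to work for this excerpt but is only an approximation of the real audit event serial, and the input filename is hypothetical.

    import re
    from collections import defaultdict

    # Matches journald-style audit lines as printed above, e.g.
    #   "Dec 13 03:43:33.165000 audit[1024]: SYSCALL arch=c000003e ..."
    LINE = re.compile(r'^(\w+ \d+ [\d:.]+) audit(?:\[\d+\])?: (\w+) ?(.*)$')

    events = defaultdict(lambda: {"types": set(), "path_items": 0})
    with open("boot.log") as fh:          # hypothetical file holding this excerpt
        for raw in fh:
            m = LINE.match(raw.rstrip("\n"))
            if not m:
                continue
            ts, rtype, _body = m.groups()
            ev = events[ts]
            ev["types"].add(rtype)
            if rtype == "PATH":
                ev["path_items"] += 1

    for ts, ev in sorted(events.items()):
        print(f"{ts}: {sorted(ev['types'])} with {ev['path_items']} PATH items")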
Dec 13 03:43:33.191549 kernel: ACPI: button: Power Button [PWRF]
Dec 13 03:43:33.231922 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 03:43:33.237716 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 03:43:33.257535 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 03:43:33.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:33.289920 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 03:43:33.291680 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 03:43:33.793770 lvm[1046]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 03:43:33.880593 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 03:43:33.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:33.882012 systemd[1]: Reached target cryptsetup.target.
Dec 13 03:43:33.885449 systemd[1]: Starting lvm2-activation.service...
Dec 13 03:43:33.895957 lvm[1049]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 03:43:33.988463 systemd[1]: Finished lvm2-activation.service.
Dec 13 03:43:33.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:33.989992 systemd[1]: Reached target local-fs-pre.target.
Dec 13 03:43:33.991210 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 03:43:33.991297 systemd[1]: Reached target local-fs.target.
Dec 13 03:43:33.992449 systemd[1]: Reached target machines.target.
Dec 13 03:43:33.996674 systemd[1]: Starting ldconfig.service...
Dec 13 03:43:34.062220 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 03:43:34.062664 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:34.065453 systemd[1]: Starting systemd-boot-update.service...
Dec 13 03:43:34.069031 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 03:43:34.073134 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 03:43:34.076880 systemd[1]: Starting systemd-sysext.service...
Dec 13 03:43:34.266126 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1052 (bootctl)
Dec 13 03:43:34.268732 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 03:43:34.307638 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 03:43:34.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:34.410979 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 03:43:34.419015 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 03:43:34.419724 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 03:43:34.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:34.675868 systemd-networkd[1020]: lo: Link UP
Dec 13 03:43:34.675885 systemd-networkd[1020]: lo: Gained carrier
Dec 13 03:43:34.677150 systemd-networkd[1020]: Enumeration completed
Dec 13 03:43:34.677421 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 03:43:34.677674 systemd[1]: Started systemd-networkd.service.
Dec 13 03:43:34.786993 systemd-networkd[1020]: eth0: Link UP
Dec 13 03:43:34.787003 systemd-networkd[1020]: eth0: Gained carrier
Dec 13 03:43:34.807738 systemd-networkd[1020]: eth0: DHCPv4 address 172.24.4.219/24, gateway 172.24.4.1 acquired from 172.24.4.1
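The lease reported above (172.24.4.219/24 with gateway 172.24.4.1, handed out by 172.24.4.1) can be sanity-checked with nothing more than the standard library: the gateway has to fall inside the delegated prefix for the default route to be usable on-link. A small illustrative check:

    import ipaddress

    # Values copied from the DHCPv4 line above.
    lease   = ipaddress.ip_interface("172.24.4.219/24")
    gateway = ipaddress.ip_address("172.24.4.1")

    print("network:         ", lease.network)              # 172.24.4.0/24
    print("usable addresses:", lease.network.num_addresses - 2)
    print("gateway on-link: ", gateway in lease.network)    # True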
Dec 13 03:43:35.423599 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 03:43:35.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:35.577917 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 03:43:35.579743 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 03:43:35.627801 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 03:43:35.661759 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 03:43:35.710404 (sd-sysext)[1068]: Using extensions 'kubernetes'.
Dec 13 03:43:35.710987 (sd-sysext)[1068]: Merged extensions into '/usr'.
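The '(sd-sysext)' lines show systemd-sysext finding a 'kubernetes' extension image (the loop0/loop1 capacity changes above are its squashfs being attached) and overlaying it onto /usr. The sketch below just lists the usual extension search directories; the directory set is taken from the systemd-sysext documentation and may not match every build, so treat it as an assumption.

    import os

    # Documented default systemd-sysext search locations; a given
    # distribution may add or drop entries.
    SEARCH_DIRS = (
        "/etc/extensions",
        "/run/extensions",
        "/var/lib/extensions",
        "/usr/lib/extensions",
    )

    for d in SEARCH_DIRS:
        if os.path.isdir(d):
            entries = sorted(os.listdir(d))
            print(f"{d}: {entries if entries else '(empty)'}")
        else:
            print(f"{d}: (absent)")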
Dec 13 03:43:35.731106 systemd-fsck[1064]: fsck.fat 4.2 (2021-01-31)
Dec 13 03:43:35.731106 systemd-fsck[1064]: /dev/vda1: 789 files, 119291/258078 clusters
Dec 13 03:43:35.742457 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 03:43:35.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:35.744899 systemd[1]: Mounting boot.mount...
Dec 13 03:43:35.745430 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:35.747546 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 03:43:35.748236 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 03:43:35.755945 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 03:43:35.758464 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 03:43:35.760020 systemd[1]: Starting modprobe@loop.service...
Dec 13 03:43:35.760653 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 03:43:35.760803 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:35.760947 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:35.766999 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 03:43:35.769411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 03:43:35.769592 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 03:43:35.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:35.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:35.770507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:43:35.770671 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 03:43:35.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:35.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:35.771593 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 03:43:35.771775 systemd[1]: Finished modprobe@loop.service.
Dec 13 03:43:35.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:35.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:35.778325 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:43:35.778457 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 03:43:35.780170 systemd[1]: Finished systemd-sysext.service.
Dec 13 03:43:35.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:35.784268 systemd[1]: Starting ensure-sysext.service...
Dec 13 03:43:35.785985 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 03:43:35.798148 systemd[1]: Mounted boot.mount.
Dec 13 03:43:35.810960 systemd[1]: Reloading.
Dec 13 03:43:35.815453 systemd-tmpfiles[1087]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 03:43:35.818505 systemd-tmpfiles[1087]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 03:43:35.826352 systemd-tmpfiles[1087]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
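The three warnings above are systemd-tmpfiles noticing that several tmpfiles.d fragments declare the same path; later duplicates are ignored. A rough, illustrative re-implementation of that duplicate scan follows. The parsing is deliberately naive (the second whitespace-separated column is taken as the path) and only /usr/lib/tmpfiles.d is scanned, whereas the real tool also merges /etc and /run.

    import glob
    import os
    from collections import defaultdict

    seen = defaultdict(list)
    for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
        with open(conf) as fh:
            for lineno, raw in enumerate(fh, 1):
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue
                parts = line.split()
                if len(parts) >= 2:
                    seen[parts[1]].append(f"{os.path.basename(conf)}:{lineno}")

    for path, places in sorted(seen.items()):
        if len(places) > 1:
            print(f"duplicate entry for {path}: {', '.join(places)}")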
Dec 13 03:43:35.893315 /usr/lib/systemd/system-generators/torcx-generator[1108]: time="2024-12-13T03:43:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 03:43:35.893693 /usr/lib/systemd/system-generators/torcx-generator[1108]: time="2024-12-13T03:43:35Z" level=info msg="torcx already run"
Dec 13 03:43:36.044182 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 03:43:36.044786 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 03:43:36.078562 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
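The docker.socket rewrite above happens because /var/run is a compatibility symlink to /run on systemd systems, so both spellings name the same socket and systemd normalizes the unit to the canonical form. A one-liner makes the aliasing visible; the path is simply the one quoted in the log.

    import os

    # /var/run -> /run on systemd-based systems, so both paths resolve
    # to the same location.
    print(os.path.realpath("/var/run/docker.sock"))   # /run/docker.sock here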
Dec 13 03:43:36.163264 systemd[1]: Finished systemd-boot-update.service.
Dec 13 03:43:36.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.166299 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 03:43:36.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.169676 systemd[1]: Starting audit-rules.service...
Dec 13 03:43:36.172116 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 03:43:36.175457 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 03:43:36.179215 systemd[1]: Starting systemd-resolved.service...
Dec 13 03:43:36.186353 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 03:43:36.188093 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 03:43:36.192103 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 03:43:36.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.199905 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 03:43:36.202000 audit[1170]: SYSTEM_BOOT pid=1170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.205925 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:36.206224 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 03:43:36.208561 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 03:43:36.211449 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 03:43:36.213635 systemd[1]: Starting modprobe@loop.service...
Dec 13 03:43:36.215167 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 03:43:36.215372 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:36.216922 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 03:43:36.217105 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:36.219295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 03:43:36.219471 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 03:43:36.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.222807 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:43:36.223282 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 03:43:36.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.224451 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:43:36.224920 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 03:43:36.225699 systemd[1]: Finished modprobe@loop.service.
Dec 13 03:43:36.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.231602 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:36.231879 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 03:43:36.233346 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 03:43:36.237022 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 03:43:36.239018 systemd[1]: Starting modprobe@loop.service...
Dec 13 03:43:36.241252 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 03:43:36.241461 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:36.242317 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 03:43:36.242430 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:36.244031 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 03:43:36.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.247250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 03:43:36.247433 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 03:43:36.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.249188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:43:36.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.250296 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 03:43:36.252838 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 03:43:36.253052 systemd[1]: Finished modprobe@loop.service.
Dec 13 03:43:36.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.256138 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:43:36.256190 systemd-networkd[1020]: eth0: Gained IPv6LL
Dec 13 03:43:36.256262 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 03:43:36.260252 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:36.262866 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 03:43:36.265183 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 03:43:36.269309 systemd[1]: Starting modprobe@drm.service...
Dec 13 03:43:36.270934 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 03:43:36.273858 systemd[1]: Starting modprobe@loop.service...
Dec 13 03:43:36.274674 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 03:43:36.274814 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:36.278722 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 03:43:36.280154 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 03:43:36.280292 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:36.284783 systemd[1]: Finished ensure-sysext.service.
Dec 13 03:43:36.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.285593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 03:43:36.285764 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 03:43:36.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.294455 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 03:43:36.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.295244 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 03:43:36.295409 systemd[1]: Finished modprobe@drm.service.
Dec 13 03:43:36.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.296078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:43:36.296228 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 03:43:36.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.297703 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 03:43:36.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.297932 systemd[1]: Finished modprobe@loop.service.
Dec 13 03:43:36.298502 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:43:36.298579 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 03:43:36.316013 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 03:43:36.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:36.341000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 03:43:36.341000 audit[1213]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff992a0200 a2=420 a3=0 items=0 ppid=1164 pid=1213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:43:36.341000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 03:43:36.342296 augenrules[1213]: No rules
Dec 13 03:43:36.342789 systemd[1]: Finished audit-rules.service.
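The PROCTITLE record a few lines up stores the command line of the audited process as hex with NUL separators between arguments. Decoding it shows that augenrules invoked auditctl against /etc/audit/audit.rules, which matches the "No rules" result above:

    # Hex string copied verbatim from the PROCTITLE record above.
    hexdata = ("2F7362696E2F617564697463746C002D5200"
               "2F6574632F61756469742F61756469742E72756C6573")
    argv = [part.decode() for part in bytes.fromhex(hexdata).split(b"\x00")]
    print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']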
Dec 13 03:43:36.367603 systemd[1]: Started systemd-timesyncd.service.
Dec 13 03:43:36.368626 systemd[1]: Reached target time-set.target.
Dec 13 03:43:37.647640 systemd-timesyncd[1169]: Contacted time server 129.250.35.250:123 (0.flatcar.pool.ntp.org).
Dec 13 03:43:37.649300 systemd-timesyncd[1169]: Initial clock synchronization to Fri 2024-12-13 03:43:37.647283 UTC.
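The initial synchronization steps the system clock, which is why the journal timestamps jump by a bit more than a second between the time-set messages and the lines that follow. Subtracting the two timestamps visible in the log gives a rough, purely indicative estimate of that step (message latency is mixed in):

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    started = datetime.strptime("03:43:36.368626", fmt)   # time-set.target reached
    synced  = datetime.strptime("03:43:37.647283", fmt)   # synchronized wall clock
    print(f"apparent jump: {(synced - started).total_seconds():.3f} s")   # ~1.279 s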
Dec 13 03:43:37.671485 systemd-resolved[1167]: Positive Trust Anchors:
Dec 13 03:43:37.671509 systemd-resolved[1167]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 03:43:37.671568 systemd-resolved[1167]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 03:43:37.679775 systemd-resolved[1167]: Using system hostname 'ci-3510-3-6-b-896f86a818.novalocal'.
Dec 13 03:43:37.681399 systemd[1]: Started systemd-resolved.service.
Dec 13 03:43:37.681996 systemd[1]: Reached target network.target.
Dec 13 03:43:37.682464 systemd[1]: Reached target network-online.target.
Dec 13 03:43:37.682913 systemd[1]: Reached target nss-lookup.target.
Dec 13 03:43:37.683525 ldconfig[1051]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 03:43:37.707587 systemd[1]: Finished ldconfig.service.
Dec 13 03:43:37.709598 systemd[1]: Starting systemd-update-done.service...
Dec 13 03:43:37.718983 systemd[1]: Finished systemd-update-done.service.
Dec 13 03:43:37.719650 systemd[1]: Reached target sysinit.target.
Dec 13 03:43:37.720176 systemd[1]: Started motdgen.path.
Dec 13 03:43:37.720607 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 03:43:37.721219 systemd[1]: Started logrotate.timer.
Dec 13 03:43:37.721743 systemd[1]: Started mdadm.timer.
Dec 13 03:43:37.722171 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 03:43:37.722679 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 03:43:37.722717 systemd[1]: Reached target paths.target.
Dec 13 03:43:37.723123 systemd[1]: Reached target timers.target.
Dec 13 03:43:37.723877 systemd[1]: Listening on dbus.socket.
Dec 13 03:43:37.725722 systemd[1]: Starting docker.socket...
Dec 13 03:43:37.728817 systemd[1]: Listening on sshd.socket.
Dec 13 03:43:37.729323 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:37.729920 systemd[1]: Listening on docker.socket.
Dec 13 03:43:37.730761 systemd[1]: Reached target sockets.target.
Dec 13 03:43:37.731205 systemd[1]: Reached target basic.target.
Dec 13 03:43:37.731792 systemd[1]: System is tainted: cgroupsv1
Dec 13 03:43:37.731844 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 03:43:37.731868 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 03:43:37.732999 systemd[1]: Starting containerd.service...
Dec 13 03:43:37.734545 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 03:43:37.738190 systemd[1]: Starting dbus.service...
Dec 13 03:43:37.740290 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 03:43:37.742036 systemd[1]: Starting extend-filesystems.service...
Dec 13 03:43:37.744436 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 03:43:37.807722 jq[1230]: false
Dec 13 03:43:37.748459 systemd[1]: Starting kubelet.service...
Dec 13 03:43:37.752273 systemd[1]: Starting motdgen.service...
Dec 13 03:43:37.765506 systemd[1]: Starting prepare-helm.service...
Dec 13 03:43:37.771737 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 03:43:37.774662 systemd[1]: Starting sshd-keygen.service...
Dec 13 03:43:37.778486 systemd[1]: Starting systemd-logind.service...
Dec 13 03:43:37.780653 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:37.780725 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 03:43:37.809080 jq[1243]: true
Dec 13 03:43:37.783402 systemd[1]: Starting update-engine.service...
Dec 13 03:43:37.786308 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 03:43:37.798812 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 03:43:37.799079 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 03:43:37.799436 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 03:43:37.799639 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 03:43:37.832370 jq[1249]: true
Dec 13 03:43:37.839548 tar[1247]: linux-amd64/helm
Dec 13 03:43:37.855180 extend-filesystems[1231]: Found loop1
Dec 13 03:43:37.860274 extend-filesystems[1231]: Found vda
Dec 13 03:43:37.860870 extend-filesystems[1231]: Found vda1
Dec 13 03:43:37.861474 extend-filesystems[1231]: Found vda2
Dec 13 03:43:37.861987 extend-filesystems[1231]: Found vda3
Dec 13 03:43:37.862935 extend-filesystems[1231]: Found usr
Dec 13 03:43:37.863526 extend-filesystems[1231]: Found vda4
Dec 13 03:43:37.864029 extend-filesystems[1231]: Found vda6
Dec 13 03:43:37.865097 extend-filesystems[1231]: Found vda7
Dec 13 03:43:37.865097 extend-filesystems[1231]: Found vda9
Dec 13 03:43:37.869099 extend-filesystems[1231]: Checking size of /dev/vda9
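extend-filesystems walks the block devices it found and then checks whether /dev/vda9 (the root partition on this image) still has room to grow. The raw size is easy to read back from sysfs, which always reports 512-byte sectors; a tiny sketch, assuming the device name from the log:

    dev = "vda9"   # partition named in the log above
    with open(f"/sys/class/block/{dev}/size") as fh:
        sectors = int(fh.read().strip())
    print(f"/dev/{dev}: {sectors} sectors = {sectors * 512 / 2**30:.2f} GiB")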
Dec 13 03:43:37.894120 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 03:43:37.894389 systemd[1]: Finished motdgen.service.
Dec 13 03:43:37.900243 extend-filesystems[1231]: Resized partition /dev/vda9
Dec 13 03:43:37.942597 extend-filesystems[1291]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 03:43:37.969908 env[1255]: time="2024-12-13T03:43:37.969835914Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 03:43:38.021355 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Dec 13 03:43:38.025575 dbus-daemon[1228]: [system] SELinux support is enabled
Dec 13 03:43:38.025836 systemd[1]: Started dbus.service.
Dec 13 03:43:38.028313 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 03:43:38.028359 systemd[1]: Reached target system-config.target.
Dec 13 03:43:38.028856 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 03:43:38.028872 systemd[1]: Reached target user-config.target.
Dec 13 03:43:38.031110 systemd-logind[1241]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 03:43:38.031133 systemd-logind[1241]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 03:43:38.032162 systemd-logind[1241]: New seat seat0.
Dec 13 03:43:38.033776 systemd[1]: Started systemd-logind.service.
Dec 13 03:43:38.044360 update_engine[1242]: I1213 03:43:38.039654  1242 main.cc:92] Flatcar Update Engine starting
Dec 13 03:43:38.054965 env[1255]: time="2024-12-13T03:43:38.054903588Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 03:43:38.057500 env[1255]: time="2024-12-13T03:43:38.057481403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:38.061962 systemd[1]: Started update-engine.service.
Dec 13 03:43:38.240827 update_engine[1242]: I1213 03:43:38.062033  1242 update_check_scheduler.cc:74] Next update check in 5m43s
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.062429433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.062460772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.241587494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.241634832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.241659268Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.241672423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.241778933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.242095557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.242263361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:43:38.244324 env[1255]: time="2024-12-13T03:43:38.242285423Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 03:43:38.064620 systemd[1]: Started locksmithd.service.
Dec 13 03:43:38.244803 env[1255]: time="2024-12-13T03:43:38.243071968Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 03:43:38.244803 env[1255]: time="2024-12-13T03:43:38.243094821Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 03:43:38.287361 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Dec 13 03:43:38.302132 bash[1290]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 03:43:38.447611 extend-filesystems[1291]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 03:43:38.447611 extend-filesystems[1291]: old_desc_blocks = 1, new_desc_blocks = 3
Dec 13 03:43:38.447611 extend-filesystems[1291]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Dec 13 03:43:38.303414 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.477935144Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478019301Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478058244Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478177198Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478369959Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478415444Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478451552Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478486528Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478520011Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478553113Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478586165Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478619317Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.478865609Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 03:43:38.515174 env[1255]: time="2024-12-13T03:43:38.479065133Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 03:43:38.515982 extend-filesystems[1231]: Resized filesystem in /dev/vda9
Dec 13 03:43:38.452728 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480275944Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480381522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480419733Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480507518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480544187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480578842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480609018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480640427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480671205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480705419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480736437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.480771954Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.481250301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.481296728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.522565 env[1255]: time="2024-12-13T03:43:38.481329069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.453018 systemd[1]: Finished extend-filesystems.service.
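For context: the extend-filesystems flow above is the automated equivalent of growing the root filesystem by hand. A minimal sketch, assuming /dev/vda9 is the mounted root filesystem as in this log (growpart comes from cloud-utils and may not be present on every image):
    growpart /dev/vda 9     # grow partition 9 to fill the disk (no-op if it is already maximal)
    resize2fs /dev/vda9     # online-resize the mounted ext4 filesystem, as resize2fs 1.46.5 does above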
Dec 13 03:43:38.523026 env[1255]: time="2024-12-13T03:43:38.489456692Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 03:43:38.523026 env[1255]: time="2024-12-13T03:43:38.489502999Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 03:43:38.523026 env[1255]: time="2024-12-13T03:43:38.489535530Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 03:43:38.523026 env[1255]: time="2024-12-13T03:43:38.489585905Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 03:43:38.523026 env[1255]: time="2024-12-13T03:43:38.489669992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 03:43:38.506595 systemd[1]: Started containerd.service.
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.490170020Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
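The CRI config dump above shows the runc runtime running with SystemdCgroup:false. Purely as a sketch (not something this boot does), the usual way to switch containerd to the systemd cgroup driver is to regenerate /etc/containerd/config.toml, flip the flag, and restart the service:
    containerd config default > /etc/containerd/config.toml                          # emit the default v2 config
    sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    systemctl restart containerd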
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.490329249Z" level=info msg="Connect containerd service"
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.490436740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.495396312Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.495738624Z" level=info msg="Start subscribing containerd event"
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.495819896Z" level=info msg="Start recovering state"
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.495940292Z" level=info msg="Start event monitor"
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.495982131Z" level=info msg="Start snapshots syncer"
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.496030081Z" level=info msg="Start cni network conf syncer for default"
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.496049367Z" level=info msg="Start streaming server"
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.506207378Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.506324287Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 03:43:38.523241 env[1255]: time="2024-12-13T03:43:38.506492713Z" level=info msg="containerd successfully booted in 0.537531s"
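The "no network config found in /etc/cni/net.d" error above is expected at this stage: no CNI configuration has been written yet. As an illustrative sketch only (the file name, network name and subnet below are made up, and a kubeadm/CNI add-on would normally generate this), a minimal bridge conflist would look like:
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "0.4.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF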
Dec 13 03:43:38.588388 locksmithd[1307]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
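locksmithd reports the "reboot" strategy here. If memory serves, on Flatcar this is driven by the REBOOT_STRATEGY setting in /etc/flatcar/update.conf (the path and accepted values are taken from the Flatcar documentation, not from this log), e.g.:
    cat <<'EOF' > /etc/flatcar/update.conf
    REBOOT_STRATEGY=reboot    # alternatives include off, etcd-lock, best-effort
    EOF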
Dec 13 03:43:38.995029 tar[1247]: linux-amd64/LICENSE
Dec 13 03:43:38.995429 tar[1247]: linux-amd64/README.md
Dec 13 03:43:39.003701 systemd[1]: Finished prepare-helm.service.
Dec 13 03:43:39.739536 sshd_keygen[1261]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 03:43:39.765609 systemd[1]: Finished sshd-keygen.service.
Dec 13 03:43:39.767666 systemd[1]: Starting issuegen.service...
Dec 13 03:43:39.774669 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 03:43:39.774896 systemd[1]: Finished issuegen.service.
Dec 13 03:43:39.776924 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 03:43:39.783973 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 03:43:39.785824 systemd[1]: Started getty@tty1.service.
Dec 13 03:43:39.787990 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 03:43:39.789661 systemd[1]: Reached target getty.target.
Dec 13 03:43:39.908081 systemd[1]: Started kubelet.service.
Dec 13 03:43:41.606555 kubelet[1341]: E1213 03:43:41.606390    1341 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:43:41.608656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:43:41.609012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
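This kubelet failure (and the identical restarts later in the log) all trace back to the same condition: the unit is enabled, but /var/lib/kubelet/config.yaml has not been written yet. That file is normally generated by kubeadm init/join, so the service keeps exiting until bootstrap runs. Only to illustrate the file the error refers to, a minimal hand-written KubeletConfiguration could look like this (the values are placeholders, not what kubeadm would generate):
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF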
Dec 13 03:43:45.131774 coreos-metadata[1226]: Dec 13 03:43:45.131 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 03:43:45.224520 coreos-metadata[1226]: Dec 13 03:43:45.224 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 03:43:45.442955 coreos-metadata[1226]: Dec 13 03:43:45.442 INFO Fetch successful
Dec 13 03:43:45.443261 coreos-metadata[1226]: Dec 13 03:43:45.443 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 03:43:45.457894 coreos-metadata[1226]: Dec 13 03:43:45.457 INFO Fetch successful
Dec 13 03:43:45.462896 unknown[1226]: wrote ssh authorized keys file for user: core
Dec 13 03:43:45.491699 update-ssh-keys[1352]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 03:43:45.492559 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
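coreos-metadata found no config drive and fell back to the EC2-compatible metadata service, fetching the SSH key from the URL logged above. The same endpoint can be queried by hand when debugging key injection:
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key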
Dec 13 03:43:45.493280 systemd[1]: Reached target multi-user.target.
Dec 13 03:43:45.496452 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 03:43:45.515407 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 03:43:45.515894 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 03:43:45.516236 systemd[1]: Startup finished in 9.142s (kernel) + 16.336s (userspace) = 25.478s.
Dec 13 03:43:47.582459 systemd[1]: Created slice system-sshd.slice.
Dec 13 03:43:47.585967 systemd[1]: Started sshd@0-172.24.4.219:22-172.24.4.1:37212.service.
Dec 13 03:43:49.057512 sshd[1357]: Accepted publickey for core from 172.24.4.1 port 37212 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:43:49.062035 sshd[1357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:49.096003 systemd-logind[1241]: New session 1 of user core.
Dec 13 03:43:49.100152 systemd[1]: Created slice user-500.slice.
Dec 13 03:43:49.103438 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 03:43:49.132384 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 03:43:49.137448 systemd[1]: Starting user@500.service...
Dec 13 03:43:49.152069 (systemd)[1362]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:49.242724 systemd[1362]: Queued start job for default target default.target.
Dec 13 03:43:49.243090 systemd[1362]: Reached target paths.target.
Dec 13 03:43:49.243120 systemd[1362]: Reached target sockets.target.
Dec 13 03:43:49.243143 systemd[1362]: Reached target timers.target.
Dec 13 03:43:49.243165 systemd[1362]: Reached target basic.target.
Dec 13 03:43:49.243234 systemd[1362]: Reached target default.target.
Dec 13 03:43:49.243284 systemd[1362]: Startup finished in 81ms.
Dec 13 03:43:49.243695 systemd[1]: Started user@500.service.
Dec 13 03:43:49.245005 systemd[1]: Started session-1.scope.
Dec 13 03:43:49.670434 systemd[1]: Started sshd@1-172.24.4.219:22-172.24.4.1:37222.service.
Dec 13 03:43:51.331676 sshd[1371]: Accepted publickey for core from 172.24.4.1 port 37222 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:43:51.335446 sshd[1371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:51.347387 systemd-logind[1241]: New session 2 of user core.
Dec 13 03:43:51.349264 systemd[1]: Started session-2.scope.
Dec 13 03:43:51.787170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 03:43:51.787711 systemd[1]: Stopped kubelet.service.
Dec 13 03:43:51.791888 systemd[1]: Starting kubelet.service...
Dec 13 03:43:52.075136 sshd[1371]: pam_unix(sshd:session): session closed for user core
Dec 13 03:43:52.080420 systemd[1]: Started sshd@2-172.24.4.219:22-172.24.4.1:37228.service.
Dec 13 03:43:52.081969 systemd[1]: sshd@1-172.24.4.219:22-172.24.4.1:37222.service: Deactivated successfully.
Dec 13 03:43:52.083873 systemd-logind[1241]: Session 2 logged out. Waiting for processes to exit.
Dec 13 03:43:52.084065 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 03:43:52.100265 systemd-logind[1241]: Removed session 2.
Dec 13 03:43:52.102074 systemd[1]: Started kubelet.service.
Dec 13 03:43:52.176025 kubelet[1387]: E1213 03:43:52.175984    1387 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:43:52.179436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:43:52.179591 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:43:53.486319 sshd[1379]: Accepted publickey for core from 172.24.4.1 port 37228 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:43:53.489248 sshd[1379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:53.500728 systemd-logind[1241]: New session 3 of user core.
Dec 13 03:43:53.501909 systemd[1]: Started session-3.scope.
Dec 13 03:43:54.134484 sshd[1379]: pam_unix(sshd:session): session closed for user core
Dec 13 03:43:54.135646 systemd[1]: Started sshd@3-172.24.4.219:22-172.24.4.1:37236.service.
Dec 13 03:43:54.144252 systemd[1]: sshd@2-172.24.4.219:22-172.24.4.1:37228.service: Deactivated successfully.
Dec 13 03:43:54.149058 systemd-logind[1241]: Session 3 logged out. Waiting for processes to exit.
Dec 13 03:43:54.149209 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 03:43:54.154223 systemd-logind[1241]: Removed session 3.
Dec 13 03:43:55.404304 sshd[1398]: Accepted publickey for core from 172.24.4.1 port 37236 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:43:55.407308 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:55.417470 systemd-logind[1241]: New session 4 of user core.
Dec 13 03:43:55.419119 systemd[1]: Started session-4.scope.
Dec 13 03:43:56.279796 sshd[1398]: pam_unix(sshd:session): session closed for user core
Dec 13 03:43:56.286945 systemd[1]: Started sshd@4-172.24.4.219:22-172.24.4.1:59536.service.
Dec 13 03:43:56.289142 systemd[1]: sshd@3-172.24.4.219:22-172.24.4.1:37236.service: Deactivated successfully.
Dec 13 03:43:56.291946 systemd-logind[1241]: Session 4 logged out. Waiting for processes to exit.
Dec 13 03:43:56.293203 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 03:43:56.301521 systemd-logind[1241]: Removed session 4.
Dec 13 03:43:57.854612 sshd[1405]: Accepted publickey for core from 172.24.4.1 port 59536 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:43:57.857555 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:57.869224 systemd[1]: Started session-5.scope.
Dec 13 03:43:57.871060 systemd-logind[1241]: New session 5 of user core.
Dec 13 03:43:58.312005 sudo[1411]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 03:43:58.313472 sudo[1411]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 03:43:58.324632 dbus-daemon[1228]: avc:  received setenforce notice (enforcing=-1266296528)
Dec 13 03:43:58.330654 sudo[1411]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:58.554647 sshd[1405]: pam_unix(sshd:session): session closed for user core
Dec 13 03:43:58.558595 systemd[1]: Started sshd@5-172.24.4.219:22-172.24.4.1:59538.service.
Dec 13 03:43:58.566504 systemd[1]: sshd@4-172.24.4.219:22-172.24.4.1:59536.service: Deactivated successfully.
Dec 13 03:43:58.571776 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 03:43:58.572504 systemd-logind[1241]: Session 5 logged out. Waiting for processes to exit.
Dec 13 03:43:58.580890 systemd-logind[1241]: Removed session 5.
Dec 13 03:43:59.761160 sshd[1413]: Accepted publickey for core from 172.24.4.1 port 59538 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:43:59.764061 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:59.774804 systemd-logind[1241]: New session 6 of user core.
Dec 13 03:43:59.775692 systemd[1]: Started session-6.scope.
Dec 13 03:44:00.200555 sudo[1420]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 03:44:00.201210 sudo[1420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 03:44:00.208244 sudo[1420]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:00.219676 sudo[1419]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 03:44:00.220935 sudo[1419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 03:44:00.244811 systemd[1]: Stopping audit-rules.service...
Dec 13 03:44:00.247000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 13 03:44:00.250136 kernel: kauditd_printk_skb: 181 callbacks suppressed
Dec 13 03:44:00.250254 kernel: audit: type=1305 audit(1734061440.247:162): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 13 03:44:00.255628 auditctl[1423]: No rules
Dec 13 03:44:00.247000 audit[1423]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe629e26a0 a2=420 a3=0 items=0 ppid=1 pid=1423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:00.256991 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 03:44:00.257599 systemd[1]: Stopped audit-rules.service.
Dec 13 03:44:00.263062 systemd[1]: Starting audit-rules.service...
Dec 13 03:44:00.266410 kernel: audit: type=1300 audit(1734061440.247:162): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe629e26a0 a2=420 a3=0 items=0 ppid=1 pid=1423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:00.247000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Dec 13 03:44:00.276421 kernel: audit: type=1327 audit(1734061440.247:162): proctitle=2F7362696E2F617564697463746C002D44
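The audit PROCTITLE fields in these records are the command line, hex-encoded with NUL-separated arguments. A quick way to decode one, using the value from the record above:
    echo 2F7362696E2F617564697463746C002D44 | xxd -r -p | tr '\0' ' '; echo
    # prints: /sbin/auditctl -D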
Dec 13 03:44:00.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.287399 kernel: audit: type=1131 audit(1734061440.257:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.318705 augenrules[1441]: No rules
Dec 13 03:44:00.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.323314 sudo[1419]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:00.320840 systemd[1]: Finished audit-rules.service.
Dec 13 03:44:00.332492 kernel: audit: type=1130 audit(1734061440.320:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.322000 audit[1419]: USER_END pid=1419 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.322000 audit[1419]: CRED_DISP pid=1419 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.352379 kernel: audit: type=1106 audit(1734061440.322:165): pid=1419 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.352439 kernel: audit: type=1104 audit(1734061440.322:166): pid=1419 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.647940 sshd[1413]: pam_unix(sshd:session): session closed for user core
Dec 13 03:44:00.652989 systemd[1]: Started sshd@6-172.24.4.219:22-172.24.4.1:59540.service.
Dec 13 03:44:00.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.219:22-172.24.4.1:59540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.665389 kernel: audit: type=1130 audit(1734061440.652:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.219:22-172.24.4.1:59540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.669101 systemd[1]: sshd@5-172.24.4.219:22-172.24.4.1:59538.service: Deactivated successfully.
Dec 13 03:44:00.671056 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 03:44:00.665000 audit[1413]: USER_END pid=1413 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:44:00.674914 systemd-logind[1241]: Session 6 logged out. Waiting for processes to exit.
Dec 13 03:44:00.687790 kernel: audit: type=1106 audit(1734061440.665:168): pid=1413 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:44:00.665000 audit[1413]: CRED_DISP pid=1413 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:44:00.698476 systemd-logind[1241]: Removed session 6.
Dec 13 03:44:00.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.24.4.219:22-172.24.4.1:59538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:00.699416 kernel: audit: type=1104 audit(1734061440.665:169): pid=1413 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:44:01.979000 audit[1446]: USER_ACCT pid=1446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:44:01.980731 sshd[1446]: Accepted publickey for core from 172.24.4.1 port 59540 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:44:01.982000 audit[1446]: CRED_ACQ pid=1446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:44:01.982000 audit[1446]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe13a54650 a2=3 a3=0 items=0 ppid=1 pid=1446 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:01.982000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:44:01.984134 sshd[1446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:44:01.996000 systemd-logind[1241]: New session 7 of user core.
Dec 13 03:44:01.996191 systemd[1]: Started session-7.scope.
Dec 13 03:44:02.008000 audit[1446]: USER_START pid=1446 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:44:02.011000 audit[1451]: CRED_ACQ pid=1451 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:44:02.189273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 03:44:02.189818 systemd[1]: Stopped kubelet.service.
Dec 13 03:44:02.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:02.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:02.192936 systemd[1]: Starting kubelet.service...
Dec 13 03:44:02.454000 audit[1455]: USER_ACCT pid=1455 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:02.455528 sudo[1455]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 03:44:02.455000 audit[1455]: CRED_REFR pid=1455 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:02.456168 sudo[1455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 03:44:02.460000 audit[1455]: USER_START pid=1455 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:02.568600 systemd[1]: Starting docker.service...
Dec 13 03:44:02.676661 env[1465]: time="2024-12-13T03:44:02.676576199Z" level=info msg="Starting up"
Dec 13 03:44:02.680698 env[1465]: time="2024-12-13T03:44:02.680618821Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 03:44:02.680698 env[1465]: time="2024-12-13T03:44:02.680678122Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 03:44:02.680896 env[1465]: time="2024-12-13T03:44:02.680719670Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Dec 13 03:44:02.680896 env[1465]: time="2024-12-13T03:44:02.680742132Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 03:44:02.685644 env[1465]: time="2024-12-13T03:44:02.685585005Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 03:44:02.685792 env[1465]: time="2024-12-13T03:44:02.685620922Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 03:44:02.685792 env[1465]: time="2024-12-13T03:44:02.685677358Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Dec 13 03:44:02.685792 env[1465]: time="2024-12-13T03:44:02.685695141Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 03:44:03.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:03.346908 systemd[1]: Started kubelet.service.
Dec 13 03:44:03.887259 kubelet[1476]: E1213 03:44:03.887034    1476 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:44:03.891979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:44:03.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 13 03:44:03.892318 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:44:04.670588 env[1465]: time="2024-12-13T03:44:04.670512846Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Dec 13 03:44:04.671404 env[1465]: time="2024-12-13T03:44:04.671324709Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Dec 13 03:44:04.672204 env[1465]: time="2024-12-13T03:44:04.672112877Z" level=info msg="Loading containers: start."
Dec 13 03:44:05.138000 audit[1508]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1508 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.138000 audit[1508]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffebaf38680 a2=0 a3=7ffebaf3866c items=0 ppid=1465 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.138000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 13 03:44:05.143000 audit[1510]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1510 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.143000 audit[1510]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd5b9aa600 a2=0 a3=7ffd5b9aa5ec items=0 ppid=1465 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.143000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 13 03:44:05.148000 audit[1512]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.148000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdde4755d0 a2=0 a3=7ffdde4755bc items=0 ppid=1465 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.148000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 13 03:44:05.152000 audit[1514]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.152000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc5b395a60 a2=0 a3=7ffc5b395a4c items=0 ppid=1465 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.152000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
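These NETFILTER_CFG records are the Docker daemon creating its base chains; decoding the proctitle fields the same way as shown earlier gives the underlying commands:
    /usr/sbin/iptables --wait -t nat -N DOCKER
    /usr/sbin/iptables --wait -t filter -N DOCKER
    /usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
    /usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-2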
Dec 13 03:44:05.198000 audit[1516]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.319719 kernel: kauditd_printk_skb: 27 callbacks suppressed
Dec 13 03:44:05.319861 kernel: audit: type=1325 audit(1734061445.198:187): table=filter:6 family=2 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.198000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffdf2ad020 a2=0 a3=7fffdf2ad00c items=0 ppid=1465 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.339079 kernel: audit: type=1300 audit(1734061445.198:187): arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffdf2ad020 a2=0 a3=7fffdf2ad00c items=0 ppid=1465 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.339202 kernel: audit: type=1327 audit(1734061445.198:187): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Dec 13 03:44:05.198000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Dec 13 03:44:05.345992 kernel: audit: type=1325 audit(1734061445.326:188): table=filter:7 family=2 entries=1 op=nft_register_rule pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.326000 audit[1521]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.326000 audit[1521]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd881e6e60 a2=0 a3=7ffd881e6e4c items=0 ppid=1465 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.365449 kernel: audit: type=1300 audit(1734061445.326:188): arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd881e6e60 a2=0 a3=7ffd881e6e4c items=0 ppid=1465 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.424204 kernel: audit: type=1327 audit(1734061445.326:188): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Dec 13 03:44:05.326000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Dec 13 03:44:05.712000 audit[1523]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.712000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff24613100 a2=0 a3=7fff246130ec items=0 ppid=1465 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.734219 kernel: audit: type=1325 audit(1734061445.712:189): table=filter:8 family=2 entries=1 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.734479 kernel: audit: type=1300 audit(1734061445.712:189): arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff24613100 a2=0 a3=7fff246130ec items=0 ppid=1465 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.734549 kernel: audit: type=1327 audit(1734061445.712:189): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 13 03:44:05.712000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 13 03:44:05.717000 audit[1525]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.746841 kernel: audit: type=1325 audit(1734061445.717:190): table=filter:9 family=2 entries=1 op=nft_register_rule pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.717000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcd0a6cff0 a2=0 a3=7ffcd0a6cfdc items=0 ppid=1465 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.717000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Dec 13 03:44:05.721000 audit[1527]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.721000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe20251430 a2=0 a3=7ffe2025141c items=0 ppid=1465 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.721000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 13 03:44:05.740000 audit[1531]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.740000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffae8f7490 a2=0 a3=7fffae8f747c items=0 ppid=1465 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.740000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Dec 13 03:44:05.752000 audit[1532]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.752000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffea6c2df00 a2=0 a3=7ffea6c2deec items=0 ppid=1465 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.752000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 13 03:44:05.778479 kernel: Initializing XFRM netlink socket
Dec 13 03:44:05.853561 env[1465]: time="2024-12-13T03:44:05.853486523Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
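As the daemon notes, the default docker0 subnet can be overridden. The standard knob is the "bip" key in /etc/docker/daemon.json; the subnet below is only an example:
    cat <<'EOF' > /etc/docker/daemon.json
    { "bip": "10.200.0.1/24" }
    EOF
    systemctl restart docker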
Dec 13 03:44:05.899000 audit[1540]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.899000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fffa27cf310 a2=0 a3=7fffa27cf2fc items=0 ppid=1465 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.899000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Dec 13 03:44:05.923000 audit[1543]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.923000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe7c65e430 a2=0 a3=7ffe7c65e41c items=0 ppid=1465 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.923000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Dec 13 03:44:05.930000 audit[1546]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.930000 audit[1546]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffc2db34f0 a2=0 a3=7fffc2db34dc items=0 ppid=1465 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.930000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Dec 13 03:44:05.934000 audit[1548]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.934000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffce21f8240 a2=0 a3=7ffce21f822c items=0 ppid=1465 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.934000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Dec 13 03:44:05.939000 audit[1550]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.939000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffefd2c0400 a2=0 a3=7ffefd2c03ec items=0 ppid=1465 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.939000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 13 03:44:05.943000 audit[1552]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.943000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd3095da00 a2=0 a3=7ffd3095d9ec items=0 ppid=1465 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.943000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Dec 13 03:44:05.948000 audit[1554]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.948000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe048874e0 a2=0 a3=7ffe048874cc items=0 ppid=1465 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.948000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Dec 13 03:44:05.960000 audit[1557]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.960000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fffda21ba00 a2=0 a3=7fffda21b9ec items=0 ppid=1465 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.960000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Dec 13 03:44:05.964000 audit[1559]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.964000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc9a185950 a2=0 a3=7ffc9a18593c items=0 ppid=1465 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.964000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 13 03:44:05.967000 audit[1561]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.967000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdceb41890 a2=0 a3=7ffdceb4187c items=0 ppid=1465 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.967000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 13 03:44:05.971000 audit[1563]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.971000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe7ed63b80 a2=0 a3=7ffe7ed63b6c items=0 ppid=1465 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.971000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Dec 13 03:44:05.972829 systemd-networkd[1020]: docker0: Link UP
Dec 13 03:44:05.985000 audit[1567]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.985000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf0a375f0 a2=0 a3=7ffcf0a375dc items=0 ppid=1465 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.985000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Dec 13 03:44:05.991000 audit[1568]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:05.991000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe5e76d210 a2=0 a3=7ffe5e76d1fc items=0 ppid=1465 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:05.991000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 13 03:44:05.992727 env[1465]: time="2024-12-13T03:44:05.992684269Z" level=info msg="Loading containers: done."
Dec 13 03:44:06.025815 env[1465]: time="2024-12-13T03:44:06.025729966Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 03:44:06.026375 env[1465]: time="2024-12-13T03:44:06.026307449Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 03:44:06.026671 env[1465]: time="2024-12-13T03:44:06.026642227Z" level=info msg="Daemon has completed initialization"
Dec 13 03:44:06.066472 systemd[1]: Started docker.service.
Dec 13 03:44:06.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:06.087809 env[1465]: time="2024-12-13T03:44:06.087533478Z" level=info msg="API listen on /run/docker.sock"
Dec 13 03:44:08.853499 env[1255]: time="2024-12-13T03:44:08.853383539Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 03:44:09.760134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782402504.mount: Deactivated successfully.
Dec 13 03:44:13.938864 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 03:44:13.939103 systemd[1]: Stopped kubelet.service.
Dec 13 03:44:13.940834 systemd[1]: Starting kubelet.service...
Dec 13 03:44:13.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:13.941945 kernel: kauditd_printk_skb: 51 callbacks suppressed
Dec 13 03:44:13.942117 kernel: audit: type=1130 audit(1734061453.937:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:13.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:13.948280 kernel: audit: type=1131 audit(1734061453.937:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:14.307570 systemd[1]: Started kubelet.service.
Dec 13 03:44:14.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:14.316397 kernel: audit: type=1130 audit(1734061454.306:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:15.200142 env[1255]: time="2024-12-13T03:44:15.199943295Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:15.219052 env[1255]: time="2024-12-13T03:44:15.218980188Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:15.225912 env[1255]: time="2024-12-13T03:44:15.225401125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:15.231916 env[1255]: time="2024-12-13T03:44:15.231855715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:15.232277 env[1255]: time="2024-12-13T03:44:15.232234402Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 03:44:15.256607 env[1255]: time="2024-12-13T03:44:15.256518811Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 03:44:15.283666 kubelet[1615]: E1213 03:44:15.283571    1615 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:44:15.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 13 03:44:15.286798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:44:15.287111 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:44:15.296365 kernel: audit: type=1131 audit(1734061455.286:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 13 03:44:19.064495 env[1255]: time="2024-12-13T03:44:19.064296008Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:19.071887 env[1255]: time="2024-12-13T03:44:19.071818781Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:19.077807 env[1255]: time="2024-12-13T03:44:19.075567143Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:19.083430 env[1255]: time="2024-12-13T03:44:19.083220933Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 03:44:19.086395 env[1255]: time="2024-12-13T03:44:19.080877505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:19.108241 env[1255]: time="2024-12-13T03:44:19.108164589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 03:44:21.550751 env[1255]: time="2024-12-13T03:44:21.550616600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:21.555825 env[1255]: time="2024-12-13T03:44:21.555735483Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:21.559460 env[1255]: time="2024-12-13T03:44:21.559377367Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:21.563412 env[1255]: time="2024-12-13T03:44:21.563326392Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:21.565723 env[1255]: time="2024-12-13T03:44:21.565659595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 03:44:21.580089 env[1255]: time="2024-12-13T03:44:21.580018941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 03:44:23.057247 update_engine[1242]: I1213 03:44:23.056279  1242 update_attempter.cc:509] Updating boot flags...
Dec 13 03:44:24.227825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985217921.mount: Deactivated successfully.
Dec 13 03:44:25.438826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 03:44:25.439052 systemd[1]: Stopped kubelet.service.
Dec 13 03:44:25.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:25.440643 systemd[1]: Starting kubelet.service...
Dec 13 03:44:25.445637 kernel: audit: type=1130 audit(1734061465.437:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:25.445734 kernel: audit: type=1131 audit(1734061465.437:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:25.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:26.927001 systemd[1]: Started kubelet.service.
Dec 13 03:44:26.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:26.937503 kernel: audit: type=1130 audit(1734061466.926:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:27.006608 kubelet[1661]: E1213 03:44:27.006533    1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:44:27.010232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:44:27.010418 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:44:27.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 13 03:44:27.015394 kernel: audit: type=1131 audit(1734061467.009:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 13 03:44:27.252934 env[1255]: time="2024-12-13T03:44:27.252786239Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:27.260936 env[1255]: time="2024-12-13T03:44:27.260860589Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:27.267966 env[1255]: time="2024-12-13T03:44:27.267896213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:27.273790 env[1255]: time="2024-12-13T03:44:27.273712860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:27.275416 env[1255]: time="2024-12-13T03:44:27.275309868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 03:44:27.297060 env[1255]: time="2024-12-13T03:44:27.296969989Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 03:44:27.918315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969795214.mount: Deactivated successfully.
Dec 13 03:44:30.807605 env[1255]: time="2024-12-13T03:44:30.807276324Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:30.815206 env[1255]: time="2024-12-13T03:44:30.815102625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:30.824265 env[1255]: time="2024-12-13T03:44:30.824146188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:30.829539 env[1255]: time="2024-12-13T03:44:30.829442757Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:30.831076 env[1255]: time="2024-12-13T03:44:30.830999718Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 03:44:30.856995 env[1255]: time="2024-12-13T03:44:30.856933082Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 03:44:31.420927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775788915.mount: Deactivated successfully.
Dec 13 03:44:31.434917 env[1255]: time="2024-12-13T03:44:31.434854208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:31.440005 env[1255]: time="2024-12-13T03:44:31.439947373Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:31.445467 env[1255]: time="2024-12-13T03:44:31.445417406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:31.450215 env[1255]: time="2024-12-13T03:44:31.450165372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:31.453378 env[1255]: time="2024-12-13T03:44:31.452033297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 03:44:31.469625 env[1255]: time="2024-12-13T03:44:31.469532808Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 03:44:32.093679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119470387.mount: Deactivated successfully.
Dec 13 03:44:37.189575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 03:44:37.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:37.190230 systemd[1]: Stopped kubelet.service.
Dec 13 03:44:37.199395 kernel: audit: type=1130 audit(1734061477.188:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:37.199974 systemd[1]: Starting kubelet.service...
Dec 13 03:44:37.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:37.209418 kernel: audit: type=1131 audit(1734061477.188:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:38.506317 env[1255]: time="2024-12-13T03:44:38.506190131Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:38.533519 env[1255]: time="2024-12-13T03:44:38.531844270Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:38.543995 env[1255]: time="2024-12-13T03:44:38.543893337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:38.560795 env[1255]: time="2024-12-13T03:44:38.560697701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:38.562578 env[1255]: time="2024-12-13T03:44:38.562237425Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 03:44:39.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:39.271629 systemd[1]: Started kubelet.service.
Dec 13 03:44:39.282548 kernel: audit: type=1130 audit(1734061479.270:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:39.388439 kubelet[1701]: E1213 03:44:39.388378    1701 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:44:39.391679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:44:39.391855 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:44:39.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 13 03:44:39.396380 kernel: audit: type=1131 audit(1734061479.390:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 13 03:44:43.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:43.651088 systemd[1]: Stopped kubelet.service.
Dec 13 03:44:43.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:43.656866 systemd[1]: Starting kubelet.service...
Dec 13 03:44:43.661655 kernel: audit: type=1130 audit(1734061483.649:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:43.661740 kernel: audit: type=1131 audit(1734061483.650:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:43.698461 systemd[1]: Reloading.
Dec 13 03:44:43.813556 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2024-12-13T03:44:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 03:44:43.813933 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2024-12-13T03:44:43Z" level=info msg="torcx already run"
Dec 13 03:44:44.280972 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 03:44:44.281005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 03:44:44.306478 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 03:44:44.410743 systemd[1]: Started kubelet.service.
Dec 13 03:44:44.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:44.420383 kernel: audit: type=1130 audit(1734061484.409:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:44.422539 systemd[1]: Stopping kubelet.service...
Dec 13 03:44:44.426597 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 03:44:44.426847 systemd[1]: Stopped kubelet.service.
Dec 13 03:44:44.435460 kernel: audit: type=1131 audit(1734061484.425:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:44.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:44.428557 systemd[1]: Starting kubelet.service...
Dec 13 03:44:44.947804 systemd[1]: Started kubelet.service.
Dec 13 03:44:44.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:44.963406 kernel: audit: type=1130 audit(1734061484.947:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:45.105631 kubelet[1860]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:44:45.107981 kubelet[1860]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 03:44:45.108222 kubelet[1860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:44:45.110898 kubelet[1860]: I1213 03:44:45.108985    1860 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 03:44:45.997551 kubelet[1860]: I1213 03:44:45.997506    1860 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 03:44:45.997551 kubelet[1860]: I1213 03:44:45.997558    1860 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 03:44:45.998008 kubelet[1860]: I1213 03:44:45.997947    1860 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 03:44:46.045776 kubelet[1860]: E1213 03:44:46.045751    1860 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.219:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:46.058086 kubelet[1860]: I1213 03:44:46.058052    1860 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 03:44:46.076095 kubelet[1860]: I1213 03:44:46.076061    1860 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 03:44:46.076935 kubelet[1860]: I1213 03:44:46.076899    1860 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 03:44:46.077415 kubelet[1860]: I1213 03:44:46.077380    1860 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 03:44:46.077562 kubelet[1860]: I1213 03:44:46.077442    1860 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 03:44:46.077562 kubelet[1860]: I1213 03:44:46.077469    1860 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 03:44:46.082432 kubelet[1860]: I1213 03:44:46.082386    1860 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:44:46.082695 kubelet[1860]: I1213 03:44:46.082669    1860 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 03:44:46.082778 kubelet[1860]: I1213 03:44:46.082728    1860 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 03:44:46.082824 kubelet[1860]: I1213 03:44:46.082797    1860 kubelet.go:312] "Adding apiserver pod source"
Dec 13 03:44:46.082857 kubelet[1860]: I1213 03:44:46.082846    1860 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 03:44:46.098622 kubelet[1860]: I1213 03:44:46.098595    1860 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 03:44:46.099304 kubelet[1860]: W1213 03:44:46.099203    1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.219:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:46.099419 kubelet[1860]: E1213 03:44:46.099382    1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.219:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:46.099699 kubelet[1860]: W1213 03:44:46.099604    1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-b-896f86a818.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:46.099790 kubelet[1860]: E1213 03:44:46.099741    1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-b-896f86a818.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:46.103404 kubelet[1860]: I1213 03:44:46.103373    1860 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 03:44:46.107296 kubelet[1860]: W1213 03:44:46.107259    1860 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 03:44:46.108302 kubelet[1860]: I1213 03:44:46.108283    1860 server.go:1256] "Started kubelet"
Dec 13 03:44:46.108000 audit[1860]: AVC avc:  denied  { mac_admin } for  pid=1860 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:46.112140 kubelet[1860]: I1213 03:44:46.111088    1860 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Dec 13 03:44:46.112140 kubelet[1860]: I1213 03:44:46.111205    1860 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Dec 13 03:44:46.112140 kubelet[1860]: I1213 03:44:46.111471    1860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 03:44:46.114392 kernel: audit: type=1400 audit(1734061486.108:225): avc:  denied  { mac_admin } for  pid=1860 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:46.114546 kernel: audit: type=1401 audit(1734061486.108:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 03:44:46.108000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 03:44:46.108000 audit[1860]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a52420 a1=c0009eade0 a2=c000a523f0 a3=25 items=0 ppid=1 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.122940 kernel: audit: type=1300 audit(1734061486.108:225): arch=c000003e syscall=188 success=no exit=-22 a0=c000a52420 a1=c0009eade0 a2=c000a523f0 a3=25 items=0 ppid=1 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.123133 kernel: audit: type=1327 audit(1734061486.108:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 03:44:46.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 03:44:46.125248 kubelet[1860]: E1213 03:44:46.125202    1860 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 03:44:46.126545 kubelet[1860]: I1213 03:44:46.126441    1860 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 03:44:46.127999 kubelet[1860]: I1213 03:44:46.127947    1860 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 03:44:46.128940 kubelet[1860]: I1213 03:44:46.128889    1860 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 03:44:46.129468 kubelet[1860]: I1213 03:44:46.129426    1860 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 03:44:46.129864 kubelet[1860]: I1213 03:44:46.129830    1860 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 03:44:46.130123 kubelet[1860]: I1213 03:44:46.130022    1860 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 03:44:46.108000 audit[1860]: AVC avc:  denied  { mac_admin } for  pid=1860 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:46.130967 kubelet[1860]: I1213 03:44:46.130927    1860 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 03:44:46.108000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 03:44:46.108000 audit[1860]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009d57a0 a1=c0009eadf8 a2=c000a524b0 a3=25 items=0 ppid=1 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 03:44:46.135457 kernel: audit: type=1400 audit(1734061486.108:226): avc:  denied  { mac_admin } for  pid=1860 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:46.141386 kubelet[1860]: E1213 03:44:46.140286    1860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-b-896f86a818.novalocal?timeout=10s\": dial tcp 172.24.4.219:6443: connect: connection refused" interval="200ms"
Dec 13 03:44:46.141386 kubelet[1860]: W1213 03:44:46.140572    1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:46.141386 kubelet[1860]: E1213 03:44:46.140611    1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:46.140000 audit[1872]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1872 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:46.140000 audit[1872]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffddd554410 a2=0 a3=7ffddd5543fc items=0 ppid=1860 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Dec 13 03:44:46.141000 audit[1873]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:46.141000 audit[1873]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff52a955a0 a2=0 a3=7fff52a9558c items=0 ppid=1860 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.141000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Dec 13 03:44:46.145182 kubelet[1860]: E1213 03:44:46.145113    1860 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.219:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.219:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-b-896f86a818.novalocal.18109fbbf22b2fe1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-b-896f86a818.novalocal,UID:ci-3510-3-6-b-896f86a818.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-b-896f86a818.novalocal,},FirstTimestamp:2024-12-13 03:44:46.108258273 +0000 UTC m=+1.144530191,LastTimestamp:2024-12-13 03:44:46.108258273 +0000 UTC m=+1.144530191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-b-896f86a818.novalocal,}"
Dec 13 03:44:46.146511 kubelet[1860]: I1213 03:44:46.146216    1860 factory.go:221] Registration of the containerd container factory successfully
Dec 13 03:44:46.146511 kubelet[1860]: I1213 03:44:46.146237    1860 factory.go:221] Registration of the systemd container factory successfully
Dec 13 03:44:46.146511 kubelet[1860]: I1213 03:44:46.146372    1860 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 03:44:46.154000 audit[1875]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:46.154000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc49cff700 a2=0 a3=7ffc49cff6ec items=0 ppid=1860 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 13 03:44:46.171000 audit[1878]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:46.171000 audit[1878]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe178f4f70 a2=0 a3=7ffe178f4f5c items=0 ppid=1860 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.171000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 13 03:44:46.185000 audit[1881]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:46.185000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff87f9bd40 a2=0 a3=7fff87f9bd2c items=0 ppid=1860 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Dec 13 03:44:46.187609 kubelet[1860]: I1213 03:44:46.187534    1860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 03:44:46.190000 audit[1882]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:44:46.190000 audit[1882]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec53dc7e0 a2=0 a3=7ffec53dc7cc items=0 ppid=1860 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Dec 13 03:44:46.193601 kubelet[1860]: I1213 03:44:46.193554    1860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 03:44:46.193601 kubelet[1860]: I1213 03:44:46.193607    1860 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 03:44:46.193802 kubelet[1860]: I1213 03:44:46.193648    1860 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 03:44:46.193802 kubelet[1860]: E1213 03:44:46.193719    1860 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 03:44:46.193000 audit[1883]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:46.193000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9ffbf600 a2=0 a3=7ffc9ffbf5ec items=0 ppid=1860 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Dec 13 03:44:46.195691 kubelet[1860]: W1213 03:44:46.195655    1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:46.195863 kubelet[1860]: E1213 03:44:46.195845    1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:46.194000 audit[1885]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:44:46.194000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6a142950 a2=0 a3=7ffe6a14293c items=0 ppid=1860 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.194000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Dec 13 03:44:46.195000 audit[1886]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:46.195000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb91f4b30 a2=0 a3=7ffdb91f4b1c items=0 ppid=1860 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Dec 13 03:44:46.196000 audit[1889]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:44:46.196000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff82f93d70 a2=0 a3=7fff82f93d5c items=0 ppid=1860 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.196000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Dec 13 03:44:46.199177 kubelet[1860]: I1213 03:44:46.199131    1860 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 03:44:46.199177 kubelet[1860]: I1213 03:44:46.199160    1860 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 03:44:46.199177 kubelet[1860]: I1213 03:44:46.199180    1860 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:44:46.198000 audit[1888]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:44:46.198000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffce7c17b70 a2=0 a3=7ffce7c17b5c items=0 ppid=1860 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.198000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Dec 13 03:44:46.199000 audit[1890]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:44:46.199000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe38322d40 a2=0 a3=7ffe38322d2c items=0 ppid=1860 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Dec 13 03:44:46.206366 kubelet[1860]: I1213 03:44:46.206292    1860 policy_none.go:49] "None policy: Start"
Dec 13 03:44:46.207139 kubelet[1860]: I1213 03:44:46.207120    1860 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 03:44:46.207237 kubelet[1860]: I1213 03:44:46.207153    1860 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 03:44:46.216496 kubelet[1860]: I1213 03:44:46.216460    1860 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 03:44:46.215000 audit[1860]: AVC avc:  denied  { mac_admin } for  pid=1860 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:46.215000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 03:44:46.215000 audit[1860]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bde090 a1=c000ad6720 a2=c000bde060 a3=25 items=0 ppid=1 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:46.215000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 03:44:46.216799 kubelet[1860]: I1213 03:44:46.216575    1860 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Dec 13 03:44:46.216799 kubelet[1860]: I1213 03:44:46.216795    1860 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 03:44:46.219864 kubelet[1860]: E1213 03:44:46.219821    1860 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-b-896f86a818.novalocal\" not found"
Dec 13 03:44:46.229590 kubelet[1860]: I1213 03:44:46.229560    1860 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.230319 kubelet[1860]: E1213 03:44:46.230286    1860 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.219:6443/api/v1/nodes\": dial tcp 172.24.4.219:6443: connect: connection refused" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.296565 kubelet[1860]: I1213 03:44:46.294875    1860 topology_manager.go:215] "Topology Admit Handler" podUID="9bc99952de55371aae7813b014576fcc" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.304031 kubelet[1860]: I1213 03:44:46.303987    1860 topology_manager.go:215] "Topology Admit Handler" podUID="e00560e7712d497af8cc28c146c80716" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.311811 kubelet[1860]: I1213 03:44:46.311765    1860 topology_manager.go:215] "Topology Admit Handler" podUID="8d35083900afbbc848112f083585b309" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.342577 kubelet[1860]: E1213 03:44:46.342502    1860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-b-896f86a818.novalocal?timeout=10s\": dial tcp 172.24.4.219:6443: connect: connection refused" interval="400ms"
Dec 13 03:44:46.432144 kubelet[1860]: I1213 03:44:46.432067    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bc99952de55371aae7813b014576fcc-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"9bc99952de55371aae7813b014576fcc\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.432404 kubelet[1860]: I1213 03:44:46.432278    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bc99952de55371aae7813b014576fcc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"9bc99952de55371aae7813b014576fcc\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.432404 kubelet[1860]: I1213 03:44:46.432401    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.432579 kubelet[1860]: I1213 03:44:46.432519    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.432658 kubelet[1860]: I1213 03:44:46.432626    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bc99952de55371aae7813b014576fcc-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"9bc99952de55371aae7813b014576fcc\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.432826 kubelet[1860]: I1213 03:44:46.432772    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.432962 kubelet[1860]: I1213 03:44:46.432915    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.433061 kubelet[1860]: I1213 03:44:46.433029    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.433226 kubelet[1860]: I1213 03:44:46.433184    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d35083900afbbc848112f083585b309-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"8d35083900afbbc848112f083585b309\") " pod="kube-system/kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.435293 kubelet[1860]: I1213 03:44:46.435234    1860 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.436728 kubelet[1860]: E1213 03:44:46.436693    1860 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.219:6443/api/v1/nodes\": dial tcp 172.24.4.219:6443: connect: connection refused" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.629252 env[1255]: time="2024-12-13T03:44:46.628550668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal,Uid:9bc99952de55371aae7813b014576fcc,Namespace:kube-system,Attempt:0,}"
Dec 13 03:44:46.633663 env[1255]: time="2024-12-13T03:44:46.633570113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal,Uid:8d35083900afbbc848112f083585b309,Namespace:kube-system,Attempt:0,}"
Dec 13 03:44:46.636848 env[1255]: time="2024-12-13T03:44:46.636763420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal,Uid:e00560e7712d497af8cc28c146c80716,Namespace:kube-system,Attempt:0,}"
Dec 13 03:44:46.744130 kubelet[1860]: E1213 03:44:46.744056    1860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-b-896f86a818.novalocal?timeout=10s\": dial tcp 172.24.4.219:6443: connect: connection refused" interval="800ms"
Dec 13 03:44:46.841603 kubelet[1860]: I1213 03:44:46.841537    1860 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:46.842305 kubelet[1860]: E1213 03:44:46.842265    1860 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.219:6443/api/v1/nodes\": dial tcp 172.24.4.219:6443: connect: connection refused" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:47.284050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1312509663.mount: Deactivated successfully.
Dec 13 03:44:47.299811 env[1255]: time="2024-12-13T03:44:47.299719379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.303906 env[1255]: time="2024-12-13T03:44:47.303837352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.305754 env[1255]: time="2024-12-13T03:44:47.305697685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.309283 env[1255]: time="2024-12-13T03:44:47.309225930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.311087 env[1255]: time="2024-12-13T03:44:47.311027813Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.313920 env[1255]: time="2024-12-13T03:44:47.313846517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.317932 env[1255]: time="2024-12-13T03:44:47.317863570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.325789 env[1255]: time="2024-12-13T03:44:47.325709142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.331509 env[1255]: time="2024-12-13T03:44:47.331438581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.332651 env[1255]: time="2024-12-13T03:44:47.332598588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.333636 env[1255]: time="2024-12-13T03:44:47.333596672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.334400 env[1255]: time="2024-12-13T03:44:47.334306315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:47.376856 env[1255]: time="2024-12-13T03:44:47.376301822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:44:47.376856 env[1255]: time="2024-12-13T03:44:47.376469568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:44:47.376856 env[1255]: time="2024-12-13T03:44:47.376503371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:44:47.377593 env[1255]: time="2024-12-13T03:44:47.377439780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/407779b0a3cb994a8c2727fa037e45e968fc23e48364d0cbeb580b69feb0292f pid=1898 runtime=io.containerd.runc.v2
Dec 13 03:44:47.448518 env[1255]: time="2024-12-13T03:44:47.448452645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:44:47.448761 env[1255]: time="2024-12-13T03:44:47.448733752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:44:47.448857 env[1255]: time="2024-12-13T03:44:47.448833990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:44:47.449085 env[1255]: time="2024-12-13T03:44:47.449057100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85d56c5b214d878d72fb200d7d06d42c18e78fd50a4bdf06a206703fe481f89a pid=1935 runtime=io.containerd.runc.v2
Dec 13 03:44:47.475486 env[1255]: time="2024-12-13T03:44:47.475438909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal,Uid:8d35083900afbbc848112f083585b309,Namespace:kube-system,Attempt:0,} returns sandbox id \"407779b0a3cb994a8c2727fa037e45e968fc23e48364d0cbeb580b69feb0292f\""
Dec 13 03:44:47.481015 env[1255]: time="2024-12-13T03:44:47.480979742Z" level=info msg="CreateContainer within sandbox \"407779b0a3cb994a8c2727fa037e45e968fc23e48364d0cbeb580b69feb0292f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 03:44:47.482432 env[1255]: time="2024-12-13T03:44:47.482325349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:44:47.482504 env[1255]: time="2024-12-13T03:44:47.482463700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:44:47.482541 env[1255]: time="2024-12-13T03:44:47.482502643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:44:47.482826 env[1255]: time="2024-12-13T03:44:47.482773831Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4993b8b248cbb472c500c84b00216664bb44236fcd600f8d2f585e8929b205b2 pid=1957 runtime=io.containerd.runc.v2
Dec 13 03:44:47.527474 kubelet[1860]: W1213 03:44:47.527383    1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-b-896f86a818.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:47.527935 kubelet[1860]: E1213 03:44:47.527488    1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-b-896f86a818.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:47.546546 env[1255]: time="2024-12-13T03:44:47.544095018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal,Uid:9bc99952de55371aae7813b014576fcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"85d56c5b214d878d72fb200d7d06d42c18e78fd50a4bdf06a206703fe481f89a\""
Dec 13 03:44:47.546672 kubelet[1860]: E1213 03:44:47.544714    1860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-b-896f86a818.novalocal?timeout=10s\": dial tcp 172.24.4.219:6443: connect: connection refused" interval="1.6s"
Dec 13 03:44:47.550664 env[1255]: time="2024-12-13T03:44:47.550248402Z" level=info msg="CreateContainer within sandbox \"85d56c5b214d878d72fb200d7d06d42c18e78fd50a4bdf06a206703fe481f89a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 03:44:47.561346 kubelet[1860]: W1213 03:44:47.560774    1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:47.561346 kubelet[1860]: E1213 03:44:47.560854    1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:47.562087 env[1255]: time="2024-12-13T03:44:47.562045605Z" level=info msg="CreateContainer within sandbox \"407779b0a3cb994a8c2727fa037e45e968fc23e48364d0cbeb580b69feb0292f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c8505f6c16f8a3267fe466745b931c028e02ea576b5cdd98a1fc7e47323f1f6e\""
Dec 13 03:44:47.562670 env[1255]: time="2024-12-13T03:44:47.562640071Z" level=info msg="StartContainer for \"c8505f6c16f8a3267fe466745b931c028e02ea576b5cdd98a1fc7e47323f1f6e\""
Dec 13 03:44:47.585187 env[1255]: time="2024-12-13T03:44:47.585139419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal,Uid:e00560e7712d497af8cc28c146c80716,Namespace:kube-system,Attempt:0,} returns sandbox id \"4993b8b248cbb472c500c84b00216664bb44236fcd600f8d2f585e8929b205b2\""
Dec 13 03:44:47.593946 env[1255]: time="2024-12-13T03:44:47.591364078Z" level=info msg="CreateContainer within sandbox \"4993b8b248cbb472c500c84b00216664bb44236fcd600f8d2f585e8929b205b2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 03:44:47.621204 env[1255]: time="2024-12-13T03:44:47.621153696Z" level=info msg="CreateContainer within sandbox \"85d56c5b214d878d72fb200d7d06d42c18e78fd50a4bdf06a206703fe481f89a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"761f7cfdebf1c725489684386e2f8ef382eb7bc4465b57418cc3aa8f875e7c27\""
Dec 13 03:44:47.621693 env[1255]: time="2024-12-13T03:44:47.621663373Z" level=info msg="StartContainer for \"761f7cfdebf1c725489684386e2f8ef382eb7bc4465b57418cc3aa8f875e7c27\""
Dec 13 03:44:47.635099 kubelet[1860]: W1213 03:44:47.635049    1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:47.635302 kubelet[1860]: E1213 03:44:47.635288    1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:47.638958 env[1255]: time="2024-12-13T03:44:47.638905780Z" level=info msg="CreateContainer within sandbox \"4993b8b248cbb472c500c84b00216664bb44236fcd600f8d2f585e8929b205b2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc9bd23fc5e6c008b1780a3da4244fcca4d944e0ae4dd26e36fa876f8deba5f3\""
Dec 13 03:44:47.640310 env[1255]: time="2024-12-13T03:44:47.640279279Z" level=info msg="StartContainer for \"dc9bd23fc5e6c008b1780a3da4244fcca4d944e0ae4dd26e36fa876f8deba5f3\""
Dec 13 03:44:47.644272 kubelet[1860]: I1213 03:44:47.644236    1860 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:47.644691 kubelet[1860]: E1213 03:44:47.644641    1860 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.219:6443/api/v1/nodes\": dial tcp 172.24.4.219:6443: connect: connection refused" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:47.671583 env[1255]: time="2024-12-13T03:44:47.670961764Z" level=info msg="StartContainer for \"c8505f6c16f8a3267fe466745b931c028e02ea576b5cdd98a1fc7e47323f1f6e\" returns successfully"
Dec 13 03:44:47.692133 kubelet[1860]: W1213 03:44:47.692006    1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.219:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:47.692133 kubelet[1860]: E1213 03:44:47.692092    1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.219:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:47.745859 env[1255]: time="2024-12-13T03:44:47.745783581Z" level=info msg="StartContainer for \"761f7cfdebf1c725489684386e2f8ef382eb7bc4465b57418cc3aa8f875e7c27\" returns successfully"
Dec 13 03:44:47.789149 env[1255]: time="2024-12-13T03:44:47.789086633Z" level=info msg="StartContainer for \"dc9bd23fc5e6c008b1780a3da4244fcca4d944e0ae4dd26e36fa876f8deba5f3\" returns successfully"
Dec 13 03:44:48.178199 kubelet[1860]: E1213 03:44:48.178155    1860 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.219:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.219:6443: connect: connection refused
Dec 13 03:44:49.248279 kubelet[1860]: I1213 03:44:49.248244    1860 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:50.529351 kubelet[1860]: E1213 03:44:50.529290    1860 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-b-896f86a818.novalocal\" not found" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:50.629947 kubelet[1860]: I1213 03:44:50.629911    1860 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:51.102048 kubelet[1860]: I1213 03:44:51.102006    1860 apiserver.go:52] "Watching apiserver"
Dec 13 03:44:51.130565 kubelet[1860]: I1213 03:44:51.130473    1860 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 03:44:51.648452 kubelet[1860]: W1213 03:44:51.648302    1860 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 03:44:53.985699 systemd[1]: Reloading.
Dec 13 03:44:54.132600 /usr/lib/systemd/system-generators/torcx-generator[2142]: time="2024-12-13T03:44:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 03:44:54.133028 /usr/lib/systemd/system-generators/torcx-generator[2142]: time="2024-12-13T03:44:54Z" level=info msg="torcx already run"
Dec 13 03:44:54.218366 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 03:44:54.218390 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 03:44:54.247709 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
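The three systemd warnings above each point at a concrete unit-file fix: CPUShares= and MemoryLimit= in locksmithd.service are superseded by CPUWeight= and MemoryMax=, and docker.socket should listen on /run/docker.sock rather than the legacy /var/run/ path. A minimal drop-in sketch, with illustrative values rather than the ones actually shipped in these units:

  # /etc/systemd/system/locksmithd.service.d/10-modern-directives.conf  (hypothetical drop-in path)
  [Service]
  CPUWeight=100      # replaces CPUShares=; weights default to 100 rather than 1024
  MemoryMax=128M     # replaces MemoryLimit=; value chosen only for illustration

  # /etc/systemd/system/docker.socket.d/10-run-dir.conf  (hypothetical drop-in path)
  [Socket]
  ListenStream=/run/docker.sock   # /var/run is a symlink to /run on systemd systems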
Dec 13 03:44:54.388307 kubelet[1860]: I1213 03:44:54.388260    1860 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 03:44:54.394370 systemd[1]: Stopping kubelet.service...
Dec 13 03:44:54.411483 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 03:44:54.411855 systemd[1]: Stopped kubelet.service.
Dec 13 03:44:54.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:54.413056 kernel: kauditd_printk_skb: 43 callbacks suppressed
Dec 13 03:44:54.413178 kernel: audit: type=1131 audit(1734061494.410:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:54.422662 systemd[1]: Starting kubelet.service...
Dec 13 03:44:57.181106 systemd[1]: Started kubelet.service.
Dec 13 03:44:57.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:57.191649 kernel: audit: type=1130 audit(1734061497.180:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:44:57.327369 kubelet[2205]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:44:57.327719 kubelet[2205]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 03:44:57.327967 kubelet[2205]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 03:44:57.328100 kubelet[2205]: I1213 03:44:57.328071    2205 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
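The deprecation notices above ask for these flags to move into the file passed via --config. A rough KubeletConfiguration sketch for the two flags that have config-file equivalents, assuming the v1beta1 field names containerRuntimeEndpoint and volumePluginDir and purely illustrative paths; --pod-infra-container-image has no config-file field, and per the message above the sandbox image is expected to come from the CRI runtime instead:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # replaces --container-runtime-endpoint
  volumePluginDir: /var/lib/kubelet/volumeplugins                    # replaces --volume-plugin-dir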
Dec 13 03:44:57.344805 kubelet[2205]: I1213 03:44:57.344756    2205 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 03:44:57.344805 kubelet[2205]: I1213 03:44:57.344803    2205 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 03:44:57.345204 kubelet[2205]: I1213 03:44:57.345175    2205 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 03:44:57.349602 kubelet[2205]: I1213 03:44:57.349577    2205 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 03:44:57.404235 kubelet[2205]: I1213 03:44:57.404179    2205 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 03:44:57.419627 kubelet[2205]: I1213 03:44:57.419566    2205 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 03:44:57.420128 kubelet[2205]: I1213 03:44:57.420098    2205 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 03:44:57.420361 kubelet[2205]: I1213 03:44:57.420295    2205 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 03:44:57.420361 kubelet[2205]: I1213 03:44:57.420325    2205 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 03:44:57.420361 kubelet[2205]: I1213 03:44:57.420357    2205 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 03:44:57.420645 kubelet[2205]: I1213 03:44:57.420389    2205 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:44:57.420645 kubelet[2205]: I1213 03:44:57.420486    2205 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 03:44:57.420645 kubelet[2205]: I1213 03:44:57.420502    2205 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 03:44:57.421408 kubelet[2205]: I1213 03:44:57.421378    2205 kubelet.go:312] "Adding apiserver pod source"
Dec 13 03:44:57.421585 kubelet[2205]: I1213 03:44:57.421566    2205 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 03:44:57.571599 kubelet[2205]: I1213 03:44:57.571545    2205 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 03:44:57.572507 kubelet[2205]: I1213 03:44:57.572475    2205 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 03:44:57.585587 kubelet[2205]: I1213 03:44:57.578581    2205 server.go:1256] "Started kubelet"
Dec 13 03:44:57.585928 kubelet[2205]: I1213 03:44:57.584300    2205 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 03:44:57.586365 kubelet[2205]: I1213 03:44:57.586247    2205 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 03:44:57.586365 kubelet[2205]: E1213 03:44:57.585119    2205 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 03:44:57.586365 kubelet[2205]: I1213 03:44:57.585428    2205 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 03:44:57.588145 kubelet[2205]: I1213 03:44:57.588091    2205 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 03:44:57.588000 audit[2205]: AVC avc:  denied  { mac_admin } for  pid=2205 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:57.590768 kubelet[2205]: I1213 03:44:57.590735    2205 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Dec 13 03:44:57.591030 kubelet[2205]: I1213 03:44:57.591005    2205 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Dec 13 03:44:57.597811 kubelet[2205]: I1213 03:44:57.591230    2205 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 03:44:57.598417 kubelet[2205]: I1213 03:44:57.598387    2205 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 03:44:57.598772 kubelet[2205]: I1213 03:44:57.598741    2205 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 03:44:57.599180 kubelet[2205]: I1213 03:44:57.599151    2205 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 03:44:57.600391 kernel: audit: type=1400 audit(1734061497.588:242): avc:  denied  { mac_admin } for  pid=2205 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:57.588000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 03:44:57.607416 kernel: audit: type=1401 audit(1734061497.588:242): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 03:44:57.588000 audit[2205]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c3a480 a1=c00081dbc0 a2=c000c3a450 a3=25 items=0 ppid=1 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:57.620482 kernel: audit: type=1300 audit(1734061497.588:242): arch=c000003e syscall=188 success=no exit=-22 a0=c000c3a480 a1=c00081dbc0 a2=c000c3a450 a3=25 items=0 ppid=1 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:57.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 03:44:57.625648 kubelet[2205]: I1213 03:44:57.624400    2205 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 03:44:57.636438 kernel: audit: type=1327 audit(1734061497.588:242): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 03:44:57.640647 kubelet[2205]: I1213 03:44:57.640614    2205 factory.go:221] Registration of the containerd container factory successfully
Dec 13 03:44:57.640920 kubelet[2205]: I1213 03:44:57.640894    2205 factory.go:221] Registration of the systemd container factory successfully
Dec 13 03:44:57.588000 audit[2205]: AVC avc:  denied  { mac_admin } for  pid=2205 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:57.650406 kernel: audit: type=1400 audit(1734061497.588:243): avc:  denied  { mac_admin } for  pid=2205 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:57.588000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 03:44:57.588000 audit[2205]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000847920 a1=c00081dbd8 a2=c000c3a510 a3=25 items=0 ppid=1 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:57.658961 kernel: audit: type=1401 audit(1734061497.588:243): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 03:44:57.659099 kernel: audit: type=1300 audit(1734061497.588:243): arch=c000003e syscall=188 success=no exit=-22 a0=c000847920 a1=c00081dbd8 a2=c000c3a510 a3=25 items=0 ppid=1 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:57.659169 kernel: audit: type=1327 audit(1734061497.588:243): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 03:44:57.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
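For readability, the NUL-separated proctitle hex repeated in the audit records above decodes to the kubelet command line, truncated in the audit record itself mid-flag (the remainder is not recoverable from this log):

  /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --confi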
Dec 13 03:44:57.831384 kubelet[2205]: I1213 03:44:57.831255    2205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 03:44:57.891100 kubelet[2205]: I1213 03:44:57.833846    2205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 03:44:57.891100 kubelet[2205]: I1213 03:44:57.833872    2205 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 03:44:57.891100 kubelet[2205]: I1213 03:44:57.833896    2205 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 03:44:57.891100 kubelet[2205]: E1213 03:44:57.833964    2205 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 03:44:57.891100 kubelet[2205]: I1213 03:44:57.864912    2205 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:57.927543 kubelet[2205]: I1213 03:44:57.927520    2205 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 03:44:57.927705 kubelet[2205]: I1213 03:44:57.927694    2205 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 03:44:57.927778 kubelet[2205]: I1213 03:44:57.927769    2205 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 03:44:57.928029 kubelet[2205]: I1213 03:44:57.928017    2205 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 03:44:57.928115 kubelet[2205]: I1213 03:44:57.928105    2205 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 03:44:57.928175 kubelet[2205]: I1213 03:44:57.928167    2205 policy_none.go:49] "None policy: Start"
Dec 13 03:44:57.929536 kubelet[2205]: I1213 03:44:57.929522    2205 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 03:44:57.929641 kubelet[2205]: I1213 03:44:57.929631    2205 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 03:44:57.929983 kubelet[2205]: I1213 03:44:57.929968    2205 state_mem.go:75] "Updated machine memory state"
Dec 13 03:44:57.931760 kubelet[2205]: I1213 03:44:57.931719    2205 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 03:44:57.930000 audit[2205]: AVC avc:  denied  { mac_admin } for  pid=2205 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:44:57.930000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Dec 13 03:44:57.930000 audit[2205]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001207bf0 a1=c001210000 a2=c001207bc0 a3=25 items=0 ppid=1 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:44:57.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Dec 13 03:44:57.932258 kubelet[2205]: I1213 03:44:57.932243    2205 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Dec 13 03:44:57.935007 kubelet[2205]: I1213 03:44:57.934953    2205 topology_manager.go:215] "Topology Admit Handler" podUID="9bc99952de55371aae7813b014576fcc" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:57.935180 kubelet[2205]: I1213 03:44:57.935149    2205 topology_manager.go:215] "Topology Admit Handler" podUID="e00560e7712d497af8cc28c146c80716" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:57.935275 kubelet[2205]: I1213 03:44:57.935247    2205 topology_manager.go:215] "Topology Admit Handler" podUID="8d35083900afbbc848112f083585b309" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:57.935739 kubelet[2205]: I1213 03:44:57.935724    2205 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 03:44:58.048031 kubelet[2205]: W1213 03:44:58.048009    2205 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 03:44:58.048286 kubelet[2205]: W1213 03:44:58.048275    2205 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 03:44:58.048989 kubelet[2205]: W1213 03:44:58.048974    2205 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 03:44:58.049122 kubelet[2205]: E1213 03:44:58.049108    2205 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.051688 kubelet[2205]: I1213 03:44:58.051669    2205 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.051842 kubelet[2205]: I1213 03:44:58.051829    2205 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.065884 kubelet[2205]: I1213 03:44:58.065859    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.066148 kubelet[2205]: I1213 03:44:58.066133    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.066261 kubelet[2205]: I1213 03:44:58.066248    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d35083900afbbc848112f083585b309-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"8d35083900afbbc848112f083585b309\") " pod="kube-system/kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.066393 kubelet[2205]: I1213 03:44:58.066380    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bc99952de55371aae7813b014576fcc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"9bc99952de55371aae7813b014576fcc\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.066509 kubelet[2205]: I1213 03:44:58.066497    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.066602 kubelet[2205]: I1213 03:44:58.066592    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.066726 kubelet[2205]: I1213 03:44:58.066715    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e00560e7712d497af8cc28c146c80716-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"e00560e7712d497af8cc28c146c80716\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.066830 kubelet[2205]: I1213 03:44:58.066819    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bc99952de55371aae7813b014576fcc-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"9bc99952de55371aae7813b014576fcc\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.066941 kubelet[2205]: I1213 03:44:58.066926    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bc99952de55371aae7813b014576fcc-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal\" (UID: \"9bc99952de55371aae7813b014576fcc\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:44:58.427275 kubelet[2205]: I1213 03:44:58.427183    2205 apiserver.go:52] "Watching apiserver"
Dec 13 03:44:58.499983 kubelet[2205]: I1213 03:44:58.499916    2205 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 03:44:58.986510 kubelet[2205]: I1213 03:44:58.984616    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-b-896f86a818.novalocal" podStartSLOduration=1.9845764959999999 podStartE2EDuration="1.984576496s" podCreationTimestamp="2024-12-13 03:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:44:58.98415815 +0000 UTC m=+1.762758182" watchObservedRunningTime="2024-12-13 03:44:58.984576496 +0000 UTC m=+1.763176528"
Dec 13 03:44:59.035142 kubelet[2205]: I1213 03:44:59.035113    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-b-896f86a818.novalocal" podStartSLOduration=8.035057256 podStartE2EDuration="8.035057256s" podCreationTimestamp="2024-12-13 03:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:44:59.010600792 +0000 UTC m=+1.789200844" watchObservedRunningTime="2024-12-13 03:44:59.035057256 +0000 UTC m=+1.813657288"
Dec 13 03:44:59.056582 kubelet[2205]: I1213 03:44:59.056558    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-b-896f86a818.novalocal" podStartSLOduration=2.056500956 podStartE2EDuration="2.056500956s" podCreationTimestamp="2024-12-13 03:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:44:59.036630688 +0000 UTC m=+1.815230730" watchObservedRunningTime="2024-12-13 03:44:59.056500956 +0000 UTC m=+1.835100988"
Dec 13 03:45:05.076722 sudo[1455]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:05.086210 kernel: kauditd_printk_skb: 4 callbacks suppressed
Dec 13 03:45:05.086345 kernel: audit: type=1106 audit(1734061505.075:245): pid=1455 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:45:05.075000 audit[1455]: USER_END pid=1455 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:45:05.075000 audit[1455]: CRED_DISP pid=1455 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:45:05.093129 kernel: audit: type=1104 audit(1734061505.075:246): pid=1455 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 13 03:45:05.561286 sshd[1446]: pam_unix(sshd:session): session closed for user core
Dec 13 03:45:05.562000 audit[1446]: USER_END pid=1446 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:45:05.568768 systemd[1]: sshd@6-172.24.4.219:22-172.24.4.1:59540.service: Deactivated successfully.
Dec 13 03:45:05.563000 audit[1446]: CRED_DISP pid=1446 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:45:05.571864 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 03:45:05.576191 systemd-logind[1241]: Session 7 logged out. Waiting for processes to exit.
Dec 13 03:45:05.579670 systemd-logind[1241]: Removed session 7.
Dec 13 03:45:05.594542 kernel: audit: type=1106 audit(1734061505.562:247): pid=1446 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:45:05.594763 kernel: audit: type=1104 audit(1734061505.563:248): pid=1446 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:45:05.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.219:22-172.24.4.1:59540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:45:05.604860 kernel: audit: type=1131 audit(1734061505.568:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.219:22-172.24.4.1:59540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:45:06.056100 kubelet[2205]: I1213 03:45:06.056066    2205 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 03:45:06.057220 env[1255]: time="2024-12-13T03:45:06.057059605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 03:45:06.057789 kubelet[2205]: I1213 03:45:06.057326    2205 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 03:45:06.967535 kubelet[2205]: I1213 03:45:06.967487    2205 topology_manager.go:215] "Topology Admit Handler" podUID="39aa317f-09de-4360-8a89-19bd14fc61b8" podNamespace="kube-system" podName="kube-proxy-fxcmj"
Dec 13 03:45:07.124947 kubelet[2205]: I1213 03:45:07.124858    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39aa317f-09de-4360-8a89-19bd14fc61b8-kube-proxy\") pod \"kube-proxy-fxcmj\" (UID: \"39aa317f-09de-4360-8a89-19bd14fc61b8\") " pod="kube-system/kube-proxy-fxcmj"
Dec 13 03:45:07.124947 kubelet[2205]: I1213 03:45:07.124963    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39aa317f-09de-4360-8a89-19bd14fc61b8-lib-modules\") pod \"kube-proxy-fxcmj\" (UID: \"39aa317f-09de-4360-8a89-19bd14fc61b8\") " pod="kube-system/kube-proxy-fxcmj"
Dec 13 03:45:07.125787 kubelet[2205]: I1213 03:45:07.125023    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39aa317f-09de-4360-8a89-19bd14fc61b8-xtables-lock\") pod \"kube-proxy-fxcmj\" (UID: \"39aa317f-09de-4360-8a89-19bd14fc61b8\") " pod="kube-system/kube-proxy-fxcmj"
Dec 13 03:45:07.125787 kubelet[2205]: I1213 03:45:07.125084    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nqs\" (UniqueName: \"kubernetes.io/projected/39aa317f-09de-4360-8a89-19bd14fc61b8-kube-api-access-p2nqs\") pod \"kube-proxy-fxcmj\" (UID: \"39aa317f-09de-4360-8a89-19bd14fc61b8\") " pod="kube-system/kube-proxy-fxcmj"
Dec 13 03:45:07.349893 kubelet[2205]: I1213 03:45:07.349636    2205 topology_manager.go:215] "Topology Admit Handler" podUID="9eefc508-c39a-4a3b-99f8-38342b78b0df" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-tvc2p"
Dec 13 03:45:07.529797 kubelet[2205]: I1213 03:45:07.529753    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvfmr\" (UniqueName: \"kubernetes.io/projected/9eefc508-c39a-4a3b-99f8-38342b78b0df-kube-api-access-fvfmr\") pod \"tigera-operator-c7ccbd65-tvc2p\" (UID: \"9eefc508-c39a-4a3b-99f8-38342b78b0df\") " pod="tigera-operator/tigera-operator-c7ccbd65-tvc2p"
Dec 13 03:45:07.530212 kubelet[2205]: I1213 03:45:07.530176    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9eefc508-c39a-4a3b-99f8-38342b78b0df-var-lib-calico\") pod \"tigera-operator-c7ccbd65-tvc2p\" (UID: \"9eefc508-c39a-4a3b-99f8-38342b78b0df\") " pod="tigera-operator/tigera-operator-c7ccbd65-tvc2p"
Dec 13 03:45:07.878207 env[1255]: time="2024-12-13T03:45:07.878066426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fxcmj,Uid:39aa317f-09de-4360-8a89-19bd14fc61b8,Namespace:kube-system,Attempt:0,}"
Dec 13 03:45:07.921169 env[1255]: time="2024-12-13T03:45:07.920478766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:07.921169 env[1255]: time="2024-12-13T03:45:07.920559117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:07.921169 env[1255]: time="2024-12-13T03:45:07.920590696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:07.921169 env[1255]: time="2024-12-13T03:45:07.920847468Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ded419f7e7a13acd30be905e7590e22612fb354a7d0570e55e1ef37745d86cd pid=2287 runtime=io.containerd.runc.v2
Dec 13 03:45:07.959236 env[1255]: time="2024-12-13T03:45:07.959153154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-tvc2p,Uid:9eefc508-c39a-4a3b-99f8-38342b78b0df,Namespace:tigera-operator,Attempt:0,}"
Dec 13 03:45:08.010816 env[1255]: time="2024-12-13T03:45:08.010751729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:08.011013 env[1255]: time="2024-12-13T03:45:08.010988353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:08.011137 env[1255]: time="2024-12-13T03:45:08.011113117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:08.013095 env[1255]: time="2024-12-13T03:45:08.011857945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ecc30b0e27ee5d4149f3be6f60a924918e530ba140fb6177c8b83464bedabc33 pid=2322 runtime=io.containerd.runc.v2
Dec 13 03:45:08.013789 env[1255]: time="2024-12-13T03:45:08.013758300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fxcmj,Uid:39aa317f-09de-4360-8a89-19bd14fc61b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ded419f7e7a13acd30be905e7590e22612fb354a7d0570e55e1ef37745d86cd\""
Dec 13 03:45:08.018081 env[1255]: time="2024-12-13T03:45:08.018053057Z" level=info msg="CreateContainer within sandbox \"0ded419f7e7a13acd30be905e7590e22612fb354a7d0570e55e1ef37745d86cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 03:45:08.071488 env[1255]: time="2024-12-13T03:45:08.071450194Z" level=info msg="CreateContainer within sandbox \"0ded419f7e7a13acd30be905e7590e22612fb354a7d0570e55e1ef37745d86cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b595ab350fc4c0ea7d5bda9295bc98aeea21887258f909b35c2a14fa5705488\""
Dec 13 03:45:08.074702 env[1255]: time="2024-12-13T03:45:08.074024244Z" level=info msg="StartContainer for \"6b595ab350fc4c0ea7d5bda9295bc98aeea21887258f909b35c2a14fa5705488\""
Dec 13 03:45:08.086323 env[1255]: time="2024-12-13T03:45:08.086262513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-tvc2p,Uid:9eefc508-c39a-4a3b-99f8-38342b78b0df,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ecc30b0e27ee5d4149f3be6f60a924918e530ba140fb6177c8b83464bedabc33\""
Dec 13 03:45:08.088970 env[1255]: time="2024-12-13T03:45:08.088929618Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 03:45:08.164758 env[1255]: time="2024-12-13T03:45:08.164654353Z" level=info msg="StartContainer for \"6b595ab350fc4c0ea7d5bda9295bc98aeea21887258f909b35c2a14fa5705488\" returns successfully"
Dec 13 03:45:08.382000 audit[2423]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.390388 kernel: audit: type=1325 audit(1734061508.382:250): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.382000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2f4fa610 a2=0 a3=7ffe2f4fa5fc items=0 ppid=2382 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.409900 kernel: audit: type=1300 audit(1734061508.382:250): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2f4fa610 a2=0 a3=7ffe2f4fa5fc items=0 ppid=2382 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.410098 kernel: audit: type=1327 audit(1734061508.382:250): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Dec 13 03:45:08.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Dec 13 03:45:08.382000 audit[2424]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.382000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1735c2e0 a2=0 a3=7fff1735c2cc items=0 ppid=2382 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.420076 kernel: audit: type=1325 audit(1734061508.382:251): table=nat:39 family=10 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.420258 kernel: audit: type=1300 audit(1734061508.382:251): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1735c2e0 a2=0 a3=7fff1735c2cc items=0 ppid=2382 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Dec 13 03:45:08.382000 audit[2425]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.382000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec9edc6e0 a2=0 a3=7ffec9edc6cc items=0 ppid=2382 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Dec 13 03:45:08.399000 audit[2422]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.399000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe62e2a830 a2=0 a3=7ffe62e2a81c items=0 ppid=2382 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Dec 13 03:45:08.410000 audit[2426]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.410000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa8861af0 a2=0 a3=7fffa8861adc items=0 ppid=2382 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Dec 13 03:45:08.413000 audit[2427]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.413000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1d5352a0 a2=0 a3=7fff1d53528c items=0 ppid=2382 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Dec 13 03:45:08.480000 audit[2428]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.480000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffffaa15910 a2=0 a3=7ffffaa158fc items=0 ppid=2382 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Dec 13 03:45:08.490000 audit[2430]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.490000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd7fff2b90 a2=0 a3=7ffd7fff2b7c items=0 ppid=2382 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365
Dec 13 03:45:08.497000 audit[2433]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.497000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffed362680 a2=0 a3=7fffed36266c items=0 ppid=2382 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669
Dec 13 03:45:08.500000 audit[2434]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.500000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcadb488a0 a2=0 a3=7ffcadb4888c items=0 ppid=2382 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.500000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Dec 13 03:45:08.504000 audit[2436]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.504000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1e8c44e0 a2=0 a3=7fff1e8c44cc items=0 ppid=2382 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Dec 13 03:45:08.506000 audit[2437]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.506000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec7eb8340 a2=0 a3=7ffec7eb832c items=0 ppid=2382 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Dec 13 03:45:08.510000 audit[2439]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.510000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffed301c3b0 a2=0 a3=7ffed301c39c items=0 ppid=2382 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Dec 13 03:45:08.515000 audit[2442]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.515000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd90f88140 a2=0 a3=7ffd90f8812c items=0 ppid=2382 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53
Dec 13 03:45:08.517000 audit[2443]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.517000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc80c4bff0 a2=0 a3=7ffc80c4bfdc items=0 ppid=2382 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.517000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Dec 13 03:45:08.522000 audit[2445]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.522000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc4e94ba0 a2=0 a3=7ffdc4e94b8c items=0 ppid=2382 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.522000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Dec 13 03:45:08.525000 audit[2446]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.525000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd48491730 a2=0 a3=7ffd4849171c items=0 ppid=2382 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Dec 13 03:45:08.532000 audit[2448]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.532000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd5bd92f0 a2=0 a3=7fffd5bd92dc items=0 ppid=2382 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Dec 13 03:45:08.545000 audit[2451]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.545000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdb4dffb0 a2=0 a3=7ffcdb4dff9c items=0 ppid=2382 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.545000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Dec 13 03:45:08.553000 audit[2454]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.553000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe13647b30 a2=0 a3=7ffe13647b1c items=0 ppid=2382 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Dec 13 03:45:08.555000 audit[2455]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.555000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe7c4149d0 a2=0 a3=7ffe7c4149bc items=0 ppid=2382 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.555000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Dec 13 03:45:08.559000 audit[2457]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.559000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffefea5dee0 a2=0 a3=7ffefea5decc items=0 ppid=2382 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 13 03:45:08.563000 audit[2460]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.563000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe98305800 a2=0 a3=7ffe983057ec items=0 ppid=2382 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 13 03:45:08.564000 audit[2461]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.564000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd908e610 a2=0 a3=7ffcd908e5fc items=0 ppid=2382 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Dec 13 03:45:08.567000 audit[2463]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 13 03:45:08.567000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe9a88d7a0 a2=0 a3=7ffe9a88d78c items=0 ppid=2382 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Dec 13 03:45:08.598000 audit[2469]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:08.598000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd03f20910 a2=0 a3=7ffd03f208fc items=0 ppid=2382 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:08.610000 audit[2469]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:08.610000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd03f20910 a2=0 a3=7ffd03f208fc items=0 ppid=2382 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.610000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:08.611000 audit[2474]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.611000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc82bcdbd0 a2=0 a3=7ffc82bcdbbc items=0 ppid=2382 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.611000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Dec 13 03:45:08.616000 audit[2476]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.616000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffa3f64880 a2=0 a3=7fffa3f6486c items=0 ppid=2382 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963
Dec 13 03:45:08.620000 audit[2479]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.620000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffe5444110 a2=0 a3=7fffe54440fc items=0 ppid=2382 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.620000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276
Dec 13 03:45:08.622000 audit[2480]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.622000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6bf2b810 a2=0 a3=7fff6bf2b7fc items=0 ppid=2382 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.622000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Dec 13 03:45:08.625000 audit[2482]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.625000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc25ccd30 a2=0 a3=7fffc25ccd1c items=0 ppid=2382 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.625000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Dec 13 03:45:08.627000 audit[2483]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.627000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9a300910 a2=0 a3=7ffd9a3008fc items=0 ppid=2382 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Dec 13 03:45:08.630000 audit[2485]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.630000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc865da320 a2=0 a3=7ffc865da30c items=0 ppid=2382 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.630000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245
Dec 13 03:45:08.634000 audit[2488]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.634000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd1ee55e50 a2=0 a3=7ffd1ee55e3c items=0 ppid=2382 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.634000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Dec 13 03:45:08.635000 audit[2489]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.635000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffce8d9eb0 a2=0 a3=7fffce8d9e9c items=0 ppid=2382 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.635000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Dec 13 03:45:08.638000 audit[2491]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.638000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf769c720 a2=0 a3=7ffcf769c70c items=0 ppid=2382 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.638000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Dec 13 03:45:08.639000 audit[2492]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.639000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd18e97d50 a2=0 a3=7ffd18e97d3c items=0 ppid=2382 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Dec 13 03:45:08.643000 audit[2494]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.643000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff88291060 a2=0 a3=7fff8829104c items=0 ppid=2382 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.643000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Dec 13 03:45:08.648000 audit[2497]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.648000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcfd94f60 a2=0 a3=7fffcfd94f4c items=0 ppid=2382 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.648000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Dec 13 03:45:08.660000 audit[2500]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.660000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc0e392d0 a2=0 a3=7ffcc0e392bc items=0 ppid=2382 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Dec 13 03:45:08.662000 audit[2501]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.662000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef19daf30 a2=0 a3=7ffef19daf1c items=0 ppid=2382 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Dec 13 03:45:08.665000 audit[2503]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.665000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff17f227d0 a2=0 a3=7fff17f227bc items=0 ppid=2382 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 13 03:45:08.670000 audit[2506]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.670000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffebdc9aed0 a2=0 a3=7ffebdc9aebc items=0 ppid=2382 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 13 03:45:08.674000 audit[2507]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.674000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfdb2e270 a2=0 a3=7ffcfdb2e25c items=0 ppid=2382 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Dec 13 03:45:08.677000 audit[2509]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.677000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd57c048c0 a2=0 a3=7ffd57c048ac items=0 ppid=2382 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Dec 13 03:45:08.678000 audit[2510]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.678000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd8a2c9b0 a2=0 a3=7ffcd8a2c99c items=0 ppid=2382 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.678000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Dec 13 03:45:08.681000 audit[2512]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.681000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc8bbf7b0 a2=0 a3=7ffcc8bbf79c items=0 ppid=2382 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 13 03:45:08.685000 audit[2515]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 13 03:45:08.685000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe23a0d2d0 a2=0 a3=7ffe23a0d2bc items=0 ppid=2382 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 13 03:45:08.689000 audit[2517]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Dec 13 03:45:08.689000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffef3bd6b80 a2=0 a3=7ffef3bd6b6c items=0 ppid=2382 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.689000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:08.690000 audit[2517]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Dec 13 03:45:08.690000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffef3bd6b80 a2=0 a3=7ffef3bd6b6c items=0 ppid=2382 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:08.690000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:08.998684 kubelet[2205]: I1213 03:45:08.998398    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fxcmj" podStartSLOduration=2.998275015 podStartE2EDuration="2.998275015s" podCreationTimestamp="2024-12-13 03:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:45:08.997797959 +0000 UTC m=+11.776398041" watchObservedRunningTime="2024-12-13 03:45:08.998275015 +0000 UTC m=+11.776875097"
Dec 13 03:45:10.953567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496160349.mount: Deactivated successfully.
Dec 13 03:45:13.082588 env[1255]: time="2024-12-13T03:45:13.082500214Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:13.087139 env[1255]: time="2024-12-13T03:45:13.087085765Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:13.096123 env[1255]: time="2024-12-13T03:45:13.096042017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:13.100539 env[1255]: time="2024-12-13T03:45:13.100477818Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:13.102104 env[1255]: time="2024-12-13T03:45:13.102013509Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 03:45:13.109820 env[1255]: time="2024-12-13T03:45:13.109745073Z" level=info msg="CreateContainer within sandbox \"ecc30b0e27ee5d4149f3be6f60a924918e530ba140fb6177c8b83464bedabc33\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 03:45:13.146724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114919664.mount: Deactivated successfully.
Dec 13 03:45:13.155327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404918309.mount: Deactivated successfully.
Dec 13 03:45:13.168410 env[1255]: time="2024-12-13T03:45:13.168259452Z" level=info msg="CreateContainer within sandbox \"ecc30b0e27ee5d4149f3be6f60a924918e530ba140fb6177c8b83464bedabc33\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"59fc35b6c15ef7e14ce5a0c0177b1a64f3b59a6aedcaec65be98c541dd59c61f\""
Dec 13 03:45:13.170019 env[1255]: time="2024-12-13T03:45:13.169957006Z" level=info msg="StartContainer for \"59fc35b6c15ef7e14ce5a0c0177b1a64f3b59a6aedcaec65be98c541dd59c61f\""
Dec 13 03:45:13.773815 env[1255]: time="2024-12-13T03:45:13.773677108Z" level=info msg="StartContainer for \"59fc35b6c15ef7e14ce5a0c0177b1a64f3b59a6aedcaec65be98c541dd59c61f\" returns successfully"
Dec 13 03:45:16.753000 audit[2558]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:16.756012 kernel: kauditd_printk_skb: 148 callbacks suppressed
Dec 13 03:45:16.756154 kernel: audit: type=1325 audit(1734061516.753:301): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:16.753000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc0a83f970 a2=0 a3=7ffc0a83f95c items=0 ppid=2382 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:16.769330 kernel: audit: type=1300 audit(1734061516.753:301): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc0a83f970 a2=0 a3=7ffc0a83f95c items=0 ppid=2382 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:16.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:16.781422 kernel: audit: type=1327 audit(1734061516.753:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:16.774000 audit[2558]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:16.792393 kernel: audit: type=1325 audit(1734061516.774:302): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:16.774000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0a83f970 a2=0 a3=0 items=0 ppid=2382 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:16.774000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:16.803553 kernel: audit: type=1300 audit(1734061516.774:302): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0a83f970 a2=0 a3=0 items=0 ppid=2382 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:16.803629 kernel: audit: type=1327 audit(1734061516.774:302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:16.803000 audit[2560]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:16.808392 kernel: audit: type=1325 audit(1734061516.803:303): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:16.803000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc0dfba200 a2=0 a3=7ffc0dfba1ec items=0 ppid=2382 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:16.815354 kernel: audit: type=1300 audit(1734061516.803:303): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc0dfba200 a2=0 a3=7ffc0dfba1ec items=0 ppid=2382 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:16.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:16.821368 kernel: audit: type=1327 audit(1734061516.803:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:16.817000 audit[2560]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:16.817000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0dfba200 a2=0 a3=0 items=0 ppid=2382 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:16.817000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:16.830365 kernel: audit: type=1325 audit(1734061516.817:304): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:16.985099 kubelet[2205]: I1213 03:45:16.985055    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-tvc2p" podStartSLOduration=4.969794257 podStartE2EDuration="9.984987676s" podCreationTimestamp="2024-12-13 03:45:07 +0000 UTC" firstStartedPulling="2024-12-13 03:45:08.087445303 +0000 UTC m=+10.866045335" lastFinishedPulling="2024-12-13 03:45:13.102638672 +0000 UTC m=+15.881238754" observedRunningTime="2024-12-13 03:45:13.971326221 +0000 UTC m=+16.749926303" watchObservedRunningTime="2024-12-13 03:45:16.984987676 +0000 UTC m=+19.763587718"
Dec 13 03:45:16.985565 kubelet[2205]: I1213 03:45:16.985256    2205 topology_manager.go:215] "Topology Admit Handler" podUID="b4413df6-49b9-4c93-8564-da9a9503da3b" podNamespace="calico-system" podName="calico-typha-867954bcfd-jlm5s"
Dec 13 03:45:17.095187 kubelet[2205]: I1213 03:45:17.095093    2205 topology_manager.go:215] "Topology Admit Handler" podUID="3c8837d2-6d78-4258-820f-673e264901b0" podNamespace="calico-system" podName="calico-node-pwl98"
Dec 13 03:45:17.101796 kubelet[2205]: I1213 03:45:17.101757    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqfn\" (UniqueName: \"kubernetes.io/projected/b4413df6-49b9-4c93-8564-da9a9503da3b-kube-api-access-txqfn\") pod \"calico-typha-867954bcfd-jlm5s\" (UID: \"b4413df6-49b9-4c93-8564-da9a9503da3b\") " pod="calico-system/calico-typha-867954bcfd-jlm5s"
Dec 13 03:45:17.102033 kubelet[2205]: I1213 03:45:17.101824    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b4413df6-49b9-4c93-8564-da9a9503da3b-typha-certs\") pod \"calico-typha-867954bcfd-jlm5s\" (UID: \"b4413df6-49b9-4c93-8564-da9a9503da3b\") " pod="calico-system/calico-typha-867954bcfd-jlm5s"
Dec 13 03:45:17.102033 kubelet[2205]: I1213 03:45:17.101860    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4413df6-49b9-4c93-8564-da9a9503da3b-tigera-ca-bundle\") pod \"calico-typha-867954bcfd-jlm5s\" (UID: \"b4413df6-49b9-4c93-8564-da9a9503da3b\") " pod="calico-system/calico-typha-867954bcfd-jlm5s"
Dec 13 03:45:17.204275 kubelet[2205]: I1213 03:45:17.204193    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c8837d2-6d78-4258-820f-673e264901b0-xtables-lock\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.204443 kubelet[2205]: I1213 03:45:17.204286    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8837d2-6d78-4258-820f-673e264901b0-tigera-ca-bundle\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.204443 kubelet[2205]: I1213 03:45:17.204400    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c8837d2-6d78-4258-820f-673e264901b0-cni-net-dir\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.204542 kubelet[2205]: I1213 03:45:17.204475    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzb45\" (UniqueName: \"kubernetes.io/projected/3c8837d2-6d78-4258-820f-673e264901b0-kube-api-access-hzb45\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.205052 kubelet[2205]: I1213 03:45:17.205027    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c8837d2-6d78-4258-820f-673e264901b0-cni-bin-dir\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.205116 kubelet[2205]: I1213 03:45:17.205087    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c8837d2-6d78-4258-820f-673e264901b0-var-run-calico\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.205152 kubelet[2205]: I1213 03:45:17.205116    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c8837d2-6d78-4258-820f-673e264901b0-policysync\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.205186 kubelet[2205]: I1213 03:45:17.205161    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c8837d2-6d78-4258-820f-673e264901b0-node-certs\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.205220 kubelet[2205]: I1213 03:45:17.205191    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c8837d2-6d78-4258-820f-673e264901b0-cni-log-dir\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.205257 kubelet[2205]: I1213 03:45:17.205216    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c8837d2-6d78-4258-820f-673e264901b0-flexvol-driver-host\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.205291 kubelet[2205]: I1213 03:45:17.205262    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c8837d2-6d78-4258-820f-673e264901b0-lib-modules\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.205291 kubelet[2205]: I1213 03:45:17.205289    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c8837d2-6d78-4258-820f-673e264901b0-var-lib-calico\") pod \"calico-node-pwl98\" (UID: \"3c8837d2-6d78-4258-820f-673e264901b0\") " pod="calico-system/calico-node-pwl98"
Dec 13 03:45:17.246652 kubelet[2205]: I1213 03:45:17.246616    2205 topology_manager.go:215] "Topology Admit Handler" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035" podNamespace="calico-system" podName="csi-node-driver-v5c5r"
Dec 13 03:45:17.246978 kubelet[2205]: E1213 03:45:17.246958    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:17.290781 env[1255]: time="2024-12-13T03:45:17.290235878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867954bcfd-jlm5s,Uid:b4413df6-49b9-4c93-8564-da9a9503da3b,Namespace:calico-system,Attempt:0,}"
Dec 13 03:45:17.316551 kubelet[2205]: E1213 03:45:17.316477    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.316551 kubelet[2205]: W1213 03:45:17.316513    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.316551 kubelet[2205]: E1213 03:45:17.316548    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.316784 kubelet[2205]: E1213 03:45:17.316751    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.316784 kubelet[2205]: W1213 03:45:17.316768    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.316932 kubelet[2205]: E1213 03:45:17.316786    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.316932 kubelet[2205]: E1213 03:45:17.316920    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.316932 kubelet[2205]: W1213 03:45:17.316930    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.317023 kubelet[2205]: E1213 03:45:17.316943    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.317135 kubelet[2205]: E1213 03:45:17.317115    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.317135 kubelet[2205]: W1213 03:45:17.317132    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.317228 kubelet[2205]: E1213 03:45:17.317146    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.317319 kubelet[2205]: E1213 03:45:17.317298    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.317319 kubelet[2205]: W1213 03:45:17.317314    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.317415 kubelet[2205]: E1213 03:45:17.317356    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.317532 kubelet[2205]: E1213 03:45:17.317501    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.317532 kubelet[2205]: W1213 03:45:17.317518    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.317532 kubelet[2205]: E1213 03:45:17.317533    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.317686 kubelet[2205]: E1213 03:45:17.317667    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.317686 kubelet[2205]: W1213 03:45:17.317682    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.317777 kubelet[2205]: E1213 03:45:17.317696    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.317843 kubelet[2205]: E1213 03:45:17.317825    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.317843 kubelet[2205]: W1213 03:45:17.317841    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.317979 kubelet[2205]: E1213 03:45:17.317855    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.318078 kubelet[2205]: E1213 03:45:17.318039    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.318078 kubelet[2205]: W1213 03:45:17.318049    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.318078 kubelet[2205]: E1213 03:45:17.318061    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.318216 kubelet[2205]: E1213 03:45:17.318198    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.318216 kubelet[2205]: W1213 03:45:17.318209    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.318275 kubelet[2205]: E1213 03:45:17.318222    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.318417 kubelet[2205]: E1213 03:45:17.318397    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.318417 kubelet[2205]: W1213 03:45:17.318412    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.318505 kubelet[2205]: E1213 03:45:17.318426    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.318634 kubelet[2205]: E1213 03:45:17.318559    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.318634 kubelet[2205]: W1213 03:45:17.318574    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.318634 kubelet[2205]: E1213 03:45:17.318589    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.318901 kubelet[2205]: E1213 03:45:17.318746    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.318901 kubelet[2205]: W1213 03:45:17.318761    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.318901 kubelet[2205]: E1213 03:45:17.318775    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.321128 kubelet[2205]: E1213 03:45:17.321108    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.321273 kubelet[2205]: W1213 03:45:17.321257    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.321406 kubelet[2205]: E1213 03:45:17.321391    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.321738 kubelet[2205]: E1213 03:45:17.321707    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.321738 kubelet[2205]: W1213 03:45:17.321726    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.321830 kubelet[2205]: E1213 03:45:17.321804    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.322650 kubelet[2205]: E1213 03:45:17.322094    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.322650 kubelet[2205]: W1213 03:45:17.322109    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.322650 kubelet[2205]: E1213 03:45:17.322152    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.322650 kubelet[2205]: E1213 03:45:17.322379    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.322650 kubelet[2205]: W1213 03:45:17.322389    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.322650 kubelet[2205]: E1213 03:45:17.322406    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.322650 kubelet[2205]: E1213 03:45:17.322603    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.322650 kubelet[2205]: W1213 03:45:17.322612    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.322650 kubelet[2205]: E1213 03:45:17.322628    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.322950 kubelet[2205]: E1213 03:45:17.322801    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.322950 kubelet[2205]: W1213 03:45:17.322810    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.322950 kubelet[2205]: E1213 03:45:17.322883    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.323045 kubelet[2205]: E1213 03:45:17.322970    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.323045 kubelet[2205]: W1213 03:45:17.322978    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.323104 kubelet[2205]: E1213 03:45:17.323046    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.323142 kubelet[2205]: E1213 03:45:17.323131    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.323142 kubelet[2205]: W1213 03:45:17.323140    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.323231 kubelet[2205]: E1213 03:45:17.323206    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.323390 kubelet[2205]: E1213 03:45:17.323294    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.323390 kubelet[2205]: W1213 03:45:17.323310    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.323512 kubelet[2205]: E1213 03:45:17.323413    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.326566 kubelet[2205]: E1213 03:45:17.323609    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.326566 kubelet[2205]: W1213 03:45:17.323649    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.326566 kubelet[2205]: E1213 03:45:17.323755    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.326566 kubelet[2205]: E1213 03:45:17.323929    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.326566 kubelet[2205]: W1213 03:45:17.323939    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.326566 kubelet[2205]: E1213 03:45:17.323956    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.326566 kubelet[2205]: E1213 03:45:17.324219    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.326566 kubelet[2205]: W1213 03:45:17.324228    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.326566 kubelet[2205]: E1213 03:45:17.324273    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.326566 kubelet[2205]: E1213 03:45:17.324474    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.326939 kubelet[2205]: W1213 03:45:17.324483    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.326939 kubelet[2205]: E1213 03:45:17.324540    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.326939 kubelet[2205]: E1213 03:45:17.324728    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.326939 kubelet[2205]: W1213 03:45:17.324737    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.326939 kubelet[2205]: E1213 03:45:17.324807    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.326939 kubelet[2205]: E1213 03:45:17.324993    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.326939 kubelet[2205]: W1213 03:45:17.325004    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.326939 kubelet[2205]: E1213 03:45:17.325099    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.326939 kubelet[2205]: E1213 03:45:17.325228    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.326939 kubelet[2205]: W1213 03:45:17.325263    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.327204 kubelet[2205]: E1213 03:45:17.325874    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.327204 kubelet[2205]: E1213 03:45:17.326046    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.327204 kubelet[2205]: W1213 03:45:17.326056    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.327204 kubelet[2205]: E1213 03:45:17.326072    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.327204 kubelet[2205]: E1213 03:45:17.326255    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.327204 kubelet[2205]: W1213 03:45:17.326265    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.327204 kubelet[2205]: E1213 03:45:17.326306    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.327204 kubelet[2205]: E1213 03:45:17.326533    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.327204 kubelet[2205]: W1213 03:45:17.326543    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.327204 kubelet[2205]: E1213 03:45:17.326632    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.327204 kubelet[2205]: E1213 03:45:17.327014    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.327537 kubelet[2205]: W1213 03:45:17.327024    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.327537 kubelet[2205]: E1213 03:45:17.327073    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.327537 kubelet[2205]: E1213 03:45:17.327262    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.327537 kubelet[2205]: W1213 03:45:17.327272    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.327537 kubelet[2205]: E1213 03:45:17.327376    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.327537 kubelet[2205]: E1213 03:45:17.327531    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.327537 kubelet[2205]: W1213 03:45:17.327539    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.327777 kubelet[2205]: E1213 03:45:17.327556    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.327828 kubelet[2205]: E1213 03:45:17.327792    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.327828 kubelet[2205]: W1213 03:45:17.327802    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.327828 kubelet[2205]: E1213 03:45:17.327819    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.328176 kubelet[2205]: E1213 03:45:17.328132    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.328176 kubelet[2205]: W1213 03:45:17.328149    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.328176 kubelet[2205]: E1213 03:45:17.328166    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.336376 kubelet[2205]: E1213 03:45:17.328444    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.336376 kubelet[2205]: W1213 03:45:17.328459    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.336376 kubelet[2205]: E1213 03:45:17.328536    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.336376 kubelet[2205]: E1213 03:45:17.328714    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.336376 kubelet[2205]: W1213 03:45:17.328724    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.336376 kubelet[2205]: E1213 03:45:17.328803    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.336376 kubelet[2205]: E1213 03:45:17.328985    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.336376 kubelet[2205]: W1213 03:45:17.328995    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.336376 kubelet[2205]: E1213 03:45:17.329069    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.336376 kubelet[2205]: E1213 03:45:17.329236    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.336831 kubelet[2205]: W1213 03:45:17.329247    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.336831 kubelet[2205]: E1213 03:45:17.329260    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.336831 kubelet[2205]: E1213 03:45:17.329491    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.336831 kubelet[2205]: W1213 03:45:17.329501    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.336831 kubelet[2205]: E1213 03:45:17.329517    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.336831 kubelet[2205]: E1213 03:45:17.329675    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.336831 kubelet[2205]: W1213 03:45:17.329684    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.336831 kubelet[2205]: E1213 03:45:17.329714    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.336831 kubelet[2205]: E1213 03:45:17.329850    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.336831 kubelet[2205]: W1213 03:45:17.329858    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.337293 kubelet[2205]: E1213 03:45:17.329870    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.337293 kubelet[2205]: E1213 03:45:17.330011    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.337293 kubelet[2205]: W1213 03:45:17.330020    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.337293 kubelet[2205]: E1213 03:45:17.330032    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.337293 kubelet[2205]: E1213 03:45:17.330158    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.337293 kubelet[2205]: W1213 03:45:17.330167    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.337293 kubelet[2205]: E1213 03:45:17.330179    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.368371 kubelet[2205]: E1213 03:45:17.368255    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.368579 kubelet[2205]: W1213 03:45:17.368562    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.368669 kubelet[2205]: E1213 03:45:17.368658    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.396019 env[1255]: time="2024-12-13T03:45:17.395940526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:17.396019 env[1255]: time="2024-12-13T03:45:17.395987434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:17.396272 env[1255]: time="2024-12-13T03:45:17.396228839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:17.397249 env[1255]: time="2024-12-13T03:45:17.396534194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c00f4d65eedee45151ed09d2731a5d4ea9fd593d206995083dad1c310e1077e pid=2618 runtime=io.containerd.runc.v2
Dec 13 03:45:17.400519 env[1255]: time="2024-12-13T03:45:17.400197895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pwl98,Uid:3c8837d2-6d78-4258-820f-673e264901b0,Namespace:calico-system,Attempt:0,}"
Dec 13 03:45:17.406592 kubelet[2205]: E1213 03:45:17.406298    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.406592 kubelet[2205]: W1213 03:45:17.406317    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.406592 kubelet[2205]: E1213 03:45:17.406360    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.406592 kubelet[2205]: I1213 03:45:17.406431    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e67c0431-ca4e-483a-b78f-aa6377b70035-kubelet-dir\") pod \"csi-node-driver-v5c5r\" (UID: \"e67c0431-ca4e-483a-b78f-aa6377b70035\") " pod="calico-system/csi-node-driver-v5c5r"
Dec 13 03:45:17.407017 kubelet[2205]: E1213 03:45:17.406872    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.407017 kubelet[2205]: W1213 03:45:17.406884    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.407017 kubelet[2205]: E1213 03:45:17.406901    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.407017 kubelet[2205]: I1213 03:45:17.406923    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e67c0431-ca4e-483a-b78f-aa6377b70035-socket-dir\") pod \"csi-node-driver-v5c5r\" (UID: \"e67c0431-ca4e-483a-b78f-aa6377b70035\") " pod="calico-system/csi-node-driver-v5c5r"
Dec 13 03:45:17.407504 kubelet[2205]: E1213 03:45:17.407365    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.407504 kubelet[2205]: W1213 03:45:17.407377    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.407504 kubelet[2205]: E1213 03:45:17.407393    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.407504 kubelet[2205]: I1213 03:45:17.407416    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e67c0431-ca4e-483a-b78f-aa6377b70035-registration-dir\") pod \"csi-node-driver-v5c5r\" (UID: \"e67c0431-ca4e-483a-b78f-aa6377b70035\") " pod="calico-system/csi-node-driver-v5c5r"
Dec 13 03:45:17.408250 kubelet[2205]: E1213 03:45:17.408095    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.408250 kubelet[2205]: W1213 03:45:17.408107    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.408250 kubelet[2205]: E1213 03:45:17.408124    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.408250 kubelet[2205]: I1213 03:45:17.408146    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7ktm\" (UniqueName: \"kubernetes.io/projected/e67c0431-ca4e-483a-b78f-aa6377b70035-kube-api-access-g7ktm\") pod \"csi-node-driver-v5c5r\" (UID: \"e67c0431-ca4e-483a-b78f-aa6377b70035\") " pod="calico-system/csi-node-driver-v5c5r"
Dec 13 03:45:17.409780 kubelet[2205]: E1213 03:45:17.408681    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.409780 kubelet[2205]: W1213 03:45:17.408695    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.409780 kubelet[2205]: E1213 03:45:17.408796    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.409780 kubelet[2205]: I1213 03:45:17.408824    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e67c0431-ca4e-483a-b78f-aa6377b70035-varrun\") pod \"csi-node-driver-v5c5r\" (UID: \"e67c0431-ca4e-483a-b78f-aa6377b70035\") " pod="calico-system/csi-node-driver-v5c5r"
Dec 13 03:45:17.409780 kubelet[2205]: E1213 03:45:17.408980    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.409780 kubelet[2205]: W1213 03:45:17.408990    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.409780 kubelet[2205]: E1213 03:45:17.409062    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.409780 kubelet[2205]: E1213 03:45:17.409158    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.409780 kubelet[2205]: W1213 03:45:17.409167    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.409780 kubelet[2205]: E1213 03:45:17.409239    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.410137 kubelet[2205]: E1213 03:45:17.409362    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.410137 kubelet[2205]: W1213 03:45:17.409372    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.410137 kubelet[2205]: E1213 03:45:17.409447    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.410137 kubelet[2205]: E1213 03:45:17.409539    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.410137 kubelet[2205]: W1213 03:45:17.409548    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.410137 kubelet[2205]: E1213 03:45:17.409570    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.410729 kubelet[2205]: E1213 03:45:17.410393    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.410729 kubelet[2205]: W1213 03:45:17.410405    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.410729 kubelet[2205]: E1213 03:45:17.410419    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.410729 kubelet[2205]: E1213 03:45:17.410632    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.410729 kubelet[2205]: W1213 03:45:17.410643    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.410729 kubelet[2205]: E1213 03:45:17.410656    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.411080 kubelet[2205]: E1213 03:45:17.411067    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.411159 kubelet[2205]: W1213 03:45:17.411147    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.411239 kubelet[2205]: E1213 03:45:17.411229    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.411495 kubelet[2205]: E1213 03:45:17.411482    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.412854 kubelet[2205]: W1213 03:45:17.412808    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.412989 kubelet[2205]: E1213 03:45:17.412977    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.413397 kubelet[2205]: E1213 03:45:17.413317    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.413496 kubelet[2205]: W1213 03:45:17.413483    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.413625 kubelet[2205]: E1213 03:45:17.413594    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.413905 kubelet[2205]: E1213 03:45:17.413894    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.414026 kubelet[2205]: W1213 03:45:17.414013    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.414141 kubelet[2205]: E1213 03:45:17.414112    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.471074 env[1255]: time="2024-12-13T03:45:17.470692504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:17.471074 env[1255]: time="2024-12-13T03:45:17.471031802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:17.471316 env[1255]: time="2024-12-13T03:45:17.471049927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:17.471430 env[1255]: time="2024-12-13T03:45:17.471393834Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd2a99fca59650ba3aeed46c06d652ed43e1a58257df64c89d9ae4d595e0e647 pid=2668 runtime=io.containerd.runc.v2
Dec 13 03:45:17.512009 kubelet[2205]: E1213 03:45:17.510908    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.512009 kubelet[2205]: W1213 03:45:17.510932    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.512009 kubelet[2205]: E1213 03:45:17.510958    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.512009 kubelet[2205]: E1213 03:45:17.511202    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.512009 kubelet[2205]: W1213 03:45:17.511212    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.512009 kubelet[2205]: E1213 03:45:17.511232    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.512009 kubelet[2205]: E1213 03:45:17.511497    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.512009 kubelet[2205]: W1213 03:45:17.511507    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.512009 kubelet[2205]: E1213 03:45:17.511522    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.512009 kubelet[2205]: E1213 03:45:17.511759    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.512817 kubelet[2205]: W1213 03:45:17.511769    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.512817 kubelet[2205]: E1213 03:45:17.511783    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.513239 kubelet[2205]: E1213 03:45:17.512960    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.513239 kubelet[2205]: W1213 03:45:17.512985    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.513239 kubelet[2205]: E1213 03:45:17.513007    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.514906 kubelet[2205]: E1213 03:45:17.514668    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.514906 kubelet[2205]: W1213 03:45:17.514682    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.514906 kubelet[2205]: E1213 03:45:17.514734    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.516176 kubelet[2205]: E1213 03:45:17.515237    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.516176 kubelet[2205]: W1213 03:45:17.515249    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.516176 kubelet[2205]: E1213 03:45:17.515352    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.516176 kubelet[2205]: E1213 03:45:17.515506    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.516176 kubelet[2205]: W1213 03:45:17.515514    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.516176 kubelet[2205]: E1213 03:45:17.515562    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.516176 kubelet[2205]: E1213 03:45:17.516124    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.516176 kubelet[2205]: W1213 03:45:17.516135    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.522087 kubelet[2205]: E1213 03:45:17.516225    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.525562 kubelet[2205]: E1213 03:45:17.525511    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.527697 kubelet[2205]: W1213 03:45:17.527564    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.527947 kubelet[2205]: E1213 03:45:17.527935    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.528071 kubelet[2205]: W1213 03:45:17.528040    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.528397 kubelet[2205]: E1213 03:45:17.528386    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.528503 kubelet[2205]: W1213 03:45:17.528491    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.528803 kubelet[2205]: E1213 03:45:17.528766    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.528898 kubelet[2205]: W1213 03:45:17.528885    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.529172 kubelet[2205]: E1213 03:45:17.529162    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.529274 kubelet[2205]: W1213 03:45:17.529262    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.529411 kubelet[2205]: E1213 03:45:17.529380    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.529684 kubelet[2205]: E1213 03:45:17.529673    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.529785 kubelet[2205]: W1213 03:45:17.529773    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.529894 kubelet[2205]: E1213 03:45:17.529884    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.530417 kubelet[2205]: E1213 03:45:17.530377    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.530825 kubelet[2205]: W1213 03:45:17.530811    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.530951 kubelet[2205]: E1213 03:45:17.530939    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.531163 kubelet[2205]: E1213 03:45:17.531130    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.531541 kubelet[2205]: E1213 03:45:17.531528    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.532262 kubelet[2205]: W1213 03:45:17.532174    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.532419 kubelet[2205]: E1213 03:45:17.532407    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.532858 kubelet[2205]: E1213 03:45:17.532833    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.532944 kubelet[2205]: W1213 03:45:17.532932    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.533801 kubelet[2205]: E1213 03:45:17.533788    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.534141 kubelet[2205]: E1213 03:45:17.534129    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.534233 kubelet[2205]: W1213 03:45:17.534220    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.534423 kubelet[2205]: E1213 03:45:17.534327    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.534671 kubelet[2205]: E1213 03:45:17.534656    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.534956 kubelet[2205]: E1213 03:45:17.534700    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.535070 kubelet[2205]: E1213 03:45:17.534721    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.535276 kubelet[2205]: E1213 03:45:17.535265    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.535398 kubelet[2205]: W1213 03:45:17.535385    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.535513 kubelet[2205]: E1213 03:45:17.535503    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.535749 kubelet[2205]: E1213 03:45:17.535738    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.535831 kubelet[2205]: W1213 03:45:17.535819    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.535908 kubelet[2205]: E1213 03:45:17.535899    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.536143 kubelet[2205]: E1213 03:45:17.536131    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.536227 kubelet[2205]: W1213 03:45:17.536215    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.536325 kubelet[2205]: E1213 03:45:17.536312    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.536784 kubelet[2205]: E1213 03:45:17.536772    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.536918 kubelet[2205]: W1213 03:45:17.536880    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.537678 kubelet[2205]: E1213 03:45:17.537665    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.538469 kubelet[2205]: E1213 03:45:17.538443    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.538567 kubelet[2205]: W1213 03:45:17.538469    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.538567 kubelet[2205]: E1213 03:45:17.538530    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.544589 kubelet[2205]: E1213 03:45:17.543494    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.544589 kubelet[2205]: W1213 03:45:17.543512    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.544589 kubelet[2205]: E1213 03:45:17.543530    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:17.556015 kubelet[2205]: E1213 03:45:17.555992    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:17.556209 kubelet[2205]: W1213 03:45:17.556193    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:17.556305 kubelet[2205]: E1213 03:45:17.556295    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
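The block of driver-call/plugins errors above is the kubelet's FlexVolume probe: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to unmarshal stdout as JSON, but the binary is not on the node yet (Calico's flexvol-driver container, started further down in this log, is what installs it), so the output is empty and unmarshalling fails with "unexpected end of JSON input". A minimal sketch of the init response a conforming driver is expected to print, as a hypothetical stub rather than the actual Calico uds driver:

    #!/usr/bin/env python3
    # Hypothetical FlexVolume driver stub: answer "init" with the JSON the
    # kubelet's probe expects instead of empty output.
    import json
    import sys

    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # A real driver also implements mount/unmount; report anything else
        # as unsupported so the kubelet can fall back gracefully.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())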
Dec 13 03:45:17.566391 env[1255]: time="2024-12-13T03:45:17.564482245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867954bcfd-jlm5s,Uid:b4413df6-49b9-4c93-8564-da9a9503da3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c00f4d65eedee45151ed09d2731a5d4ea9fd593d206995083dad1c310e1077e\""
Dec 13 03:45:17.568107 env[1255]: time="2024-12-13T03:45:17.568059735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 03:45:17.592429 env[1255]: time="2024-12-13T03:45:17.592383516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pwl98,Uid:3c8837d2-6d78-4258-820f-673e264901b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd2a99fca59650ba3aeed46c06d652ed43e1a58257df64c89d9ae4d595e0e647\""
Dec 13 03:45:17.845000 audit[2739]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:17.845000 audit[2739]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffd8278f4d0 a2=0 a3=7ffd8278f4bc items=0 ppid=2382 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:17.845000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:17.850000 audit[2739]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:17.850000 audit[2739]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8278f4d0 a2=0 a3=0 items=0 ppid=2382 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:17.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
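The two audit records above are iptables-restore invocations (filter and nat tables, 17 and 12 rules registered); judging by the -w/-W/--noflush/--counters flags this is kube-proxy refreshing its rules. The PROCTITLE field is the command line hex-encoded with NUL separators, and decodes as shown in this small sketch:

    # Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
    hex_title = ("69707461626C65732D726573746F7265002D770035002D5700"
                 "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    argv = bytes.fromhex(hex_title).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: iptables-restore -w 5 -W 100000 --noflush --counters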
Dec 13 03:45:18.834699 kubelet[2205]: E1213 03:45:18.834624    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:19.377020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149239594.mount: Deactivated successfully.
Dec 13 03:45:20.834946 kubelet[2205]: E1213 03:45:20.834915    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:21.826684 env[1255]: time="2024-12-13T03:45:21.824966491Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:21.828143 env[1255]: time="2024-12-13T03:45:21.828038946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:21.832790 env[1255]: time="2024-12-13T03:45:21.832703710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:21.837787 env[1255]: time="2024-12-13T03:45:21.837698434Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:21.840800 env[1255]: time="2024-12-13T03:45:21.839306893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 03:45:21.864992 env[1255]: time="2024-12-13T03:45:21.849394317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 03:45:21.881792 env[1255]: time="2024-12-13T03:45:21.881741464Z" level=info msg="CreateContainer within sandbox \"5c00f4d65eedee45151ed09d2731a5d4ea9fd593d206995083dad1c310e1077e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 03:45:21.908755 env[1255]: time="2024-12-13T03:45:21.908694636Z" level=info msg="CreateContainer within sandbox \"5c00f4d65eedee45151ed09d2731a5d4ea9fd593d206995083dad1c310e1077e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"35cdc3a2b969ba79a0d707110feaff2756251c2d858f2424c11e0fab70e97564\""
Dec 13 03:45:21.909456 env[1255]: time="2024-12-13T03:45:21.909421754Z" level=info msg="StartContainer for \"35cdc3a2b969ba79a0d707110feaff2756251c2d858f2424c11e0fab70e97564\""
Dec 13 03:45:22.004023 env[1255]: time="2024-12-13T03:45:22.003989168Z" level=info msg="StartContainer for \"35cdc3a2b969ba79a0d707110feaff2756251c2d858f2424c11e0fab70e97564\" returns successfully"
Dec 13 03:45:22.835169 kubelet[2205]: E1213 03:45:22.835117    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:23.076264 kubelet[2205]: E1213 03:45:23.076220    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.076640 kubelet[2205]: W1213 03:45:23.076607    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.076828 kubelet[2205]: E1213 03:45:23.076802    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.077552 kubelet[2205]: E1213 03:45:23.077526    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.077735 kubelet[2205]: W1213 03:45:23.077707    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.077904 kubelet[2205]: E1213 03:45:23.077881    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.078453 kubelet[2205]: E1213 03:45:23.078429    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.078637 kubelet[2205]: W1213 03:45:23.078610    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.078864 kubelet[2205]: E1213 03:45:23.078842    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.079327 kubelet[2205]: E1213 03:45:23.079302    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.079550 kubelet[2205]: W1213 03:45:23.079522    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.079712 kubelet[2205]: E1213 03:45:23.079690    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.080193 kubelet[2205]: E1213 03:45:23.080170    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.080419 kubelet[2205]: W1213 03:45:23.080389    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.080648 kubelet[2205]: E1213 03:45:23.080625    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.081076 kubelet[2205]: E1213 03:45:23.081053    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.081245 kubelet[2205]: W1213 03:45:23.081220    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.081470 kubelet[2205]: E1213 03:45:23.081446    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.081996 kubelet[2205]: E1213 03:45:23.081970    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.082182 kubelet[2205]: W1213 03:45:23.082156    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.082413 kubelet[2205]: E1213 03:45:23.082322    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.082958 kubelet[2205]: E1213 03:45:23.082934    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.083127 kubelet[2205]: W1213 03:45:23.083102    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.083289 kubelet[2205]: E1213 03:45:23.083268    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.084024 kubelet[2205]: E1213 03:45:23.084000    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.084209 kubelet[2205]: W1213 03:45:23.084184    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.084439 kubelet[2205]: E1213 03:45:23.084412    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.084985 kubelet[2205]: E1213 03:45:23.084961    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.085157 kubelet[2205]: W1213 03:45:23.085131    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.087698 kubelet[2205]: E1213 03:45:23.085296    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.088471 kubelet[2205]: E1213 03:45:23.088444    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.088662 kubelet[2205]: W1213 03:45:23.088634    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.088826 kubelet[2205]: E1213 03:45:23.088804    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.089439 kubelet[2205]: E1213 03:45:23.089318    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.089626 kubelet[2205]: W1213 03:45:23.089598    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.089785 kubelet[2205]: E1213 03:45:23.089764    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.090319 kubelet[2205]: E1213 03:45:23.090294    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.090559 kubelet[2205]: W1213 03:45:23.090530    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.090718 kubelet[2205]: E1213 03:45:23.090697    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.091226 kubelet[2205]: E1213 03:45:23.091200    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.091485 kubelet[2205]: W1213 03:45:23.091456    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.091664 kubelet[2205]: E1213 03:45:23.091642    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.092205 kubelet[2205]: E1213 03:45:23.092178    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.092446 kubelet[2205]: W1213 03:45:23.092419    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.092627 kubelet[2205]: E1213 03:45:23.092605    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.093236 kubelet[2205]: E1213 03:45:23.093211    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.093554 kubelet[2205]: W1213 03:45:23.093462    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.093668 kubelet[2205]: E1213 03:45:23.093588    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.094240 kubelet[2205]: E1213 03:45:23.094177    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.094240 kubelet[2205]: W1213 03:45:23.094218    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.094526 kubelet[2205]: E1213 03:45:23.094266    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.094827 kubelet[2205]: E1213 03:45:23.094787    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.094827 kubelet[2205]: W1213 03:45:23.094823    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.095028 kubelet[2205]: E1213 03:45:23.094864    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.095294 kubelet[2205]: E1213 03:45:23.095263    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.095294 kubelet[2205]: W1213 03:45:23.095293    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.095547 kubelet[2205]: E1213 03:45:23.095512    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.095825 kubelet[2205]: E1213 03:45:23.095795    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.095941 kubelet[2205]: W1213 03:45:23.095826    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.096023 kubelet[2205]: E1213 03:45:23.095988    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.096233 kubelet[2205]: E1213 03:45:23.096204    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.096233 kubelet[2205]: W1213 03:45:23.096232    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.096495 kubelet[2205]: E1213 03:45:23.096428    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.096679 kubelet[2205]: E1213 03:45:23.096650    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.096779 kubelet[2205]: W1213 03:45:23.096735    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.096862 kubelet[2205]: E1213 03:45:23.096838    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.097461 kubelet[2205]: E1213 03:45:23.097403    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.097461 kubelet[2205]: W1213 03:45:23.097435    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.097667 kubelet[2205]: E1213 03:45:23.097473    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.098106 kubelet[2205]: E1213 03:45:23.098051    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.098106 kubelet[2205]: W1213 03:45:23.098087    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.098439 kubelet[2205]: E1213 03:45:23.098403    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.098722 kubelet[2205]: E1213 03:45:23.098478    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.098722 kubelet[2205]: W1213 03:45:23.098711    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.099077 kubelet[2205]: E1213 03:45:23.099029    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.099077 kubelet[2205]: W1213 03:45:23.099063    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.099250 kubelet[2205]: E1213 03:45:23.099099    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.099555 kubelet[2205]: E1213 03:45:23.099527    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.099807 kubelet[2205]: E1213 03:45:23.099607    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.099963 kubelet[2205]: W1213 03:45:23.099936    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.100143 kubelet[2205]: E1213 03:45:23.100120    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.100700 kubelet[2205]: E1213 03:45:23.100645    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.100700 kubelet[2205]: W1213 03:45:23.100686    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.101009 kubelet[2205]: E1213 03:45:23.100742    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.101305 kubelet[2205]: E1213 03:45:23.101271    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.101305 kubelet[2205]: W1213 03:45:23.101303    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.101575 kubelet[2205]: E1213 03:45:23.101393    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.101885 kubelet[2205]: E1213 03:45:23.101832    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.101885 kubelet[2205]: W1213 03:45:23.101870    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.102170 kubelet[2205]: E1213 03:45:23.102140    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.102500 kubelet[2205]: E1213 03:45:23.102456    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.102500 kubelet[2205]: W1213 03:45:23.102493    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.102695 kubelet[2205]: E1213 03:45:23.102526    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.103681 kubelet[2205]: E1213 03:45:23.103635    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.103681 kubelet[2205]: W1213 03:45:23.103674    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.103888 kubelet[2205]: E1213 03:45:23.103709    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.104634 kubelet[2205]: E1213 03:45:23.104605    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:23.104797 kubelet[2205]: W1213 03:45:23.104772    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:23.104978 kubelet[2205]: E1213 03:45:23.104954    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:23.293519 kubelet[2205]: I1213 03:45:23.291253    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-867954bcfd-jlm5s" podStartSLOduration=3.016188992 podStartE2EDuration="7.29111112s" podCreationTimestamp="2024-12-13 03:45:16 +0000 UTC" firstStartedPulling="2024-12-13 03:45:17.567808171 +0000 UTC m=+20.346408203" lastFinishedPulling="2024-12-13 03:45:21.841390216 +0000 UTC m=+24.621330331" observedRunningTime="2024-12-13 03:45:23.289692549 +0000 UTC m=+26.068292631" watchObservedRunningTime="2024-12-13 03:45:23.29111112 +0000 UTC m=+26.069711202"
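The pod_startup_latency_tracker entry above reports two figures for calico-typha-867954bcfd-jlm5s: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of that arithmetic against the timestamps in the line (a sketch; the kubelet's own bookkeeping can differ by a couple of milliseconds):

    from datetime import datetime, timezone

    fmt = "%Y-%m-%d %H:%M:%S.%f %z"
    created   = datetime(2024, 12, 13, 3, 45, 16, tzinfo=timezone.utc)
    pull_from = datetime.strptime("2024-12-13 03:45:17.567808 +0000", fmt)
    pull_to   = datetime.strptime("2024-12-13 03:45:21.841390 +0000", fmt)
    running   = datetime.strptime("2024-12-13 03:45:23.291111 +0000", fmt)

    e2e = (running - created).total_seconds()           # ~7.291 s, the reported podStartE2EDuration
    slo = e2e - (pull_to - pull_from).total_seconds()   # ~3.018 s, close to the reported 3.016
    print(round(e2e, 3), round(slo, 3))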
Dec 13 03:45:24.014399 kubelet[2205]: I1213 03:45:24.014123    2205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 03:45:24.103000 kubelet[2205]: E1213 03:45:24.102959    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.103000 kubelet[2205]: W1213 03:45:24.102999    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.103223 kubelet[2205]: E1213 03:45:24.103072    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.103455 kubelet[2205]: E1213 03:45:24.103428    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.103509 kubelet[2205]: W1213 03:45:24.103457    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.103509 kubelet[2205]: E1213 03:45:24.103487    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.103825 kubelet[2205]: E1213 03:45:24.103801    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.103871 kubelet[2205]: W1213 03:45:24.103828    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.103871 kubelet[2205]: E1213 03:45:24.103857    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.104180 kubelet[2205]: E1213 03:45:24.104154    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.104223 kubelet[2205]: W1213 03:45:24.104181    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.104223 kubelet[2205]: E1213 03:45:24.104209    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.104616 kubelet[2205]: E1213 03:45:24.104590    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.104664 kubelet[2205]: W1213 03:45:24.104619    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.104664 kubelet[2205]: E1213 03:45:24.104652    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.104973 kubelet[2205]: E1213 03:45:24.104949    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.105018 kubelet[2205]: W1213 03:45:24.104977    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.105018 kubelet[2205]: E1213 03:45:24.105005    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.105318 kubelet[2205]: E1213 03:45:24.105295    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.105389 kubelet[2205]: W1213 03:45:24.105321    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.105426 kubelet[2205]: E1213 03:45:24.105387    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.105777 kubelet[2205]: E1213 03:45:24.105752    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.105827 kubelet[2205]: W1213 03:45:24.105780    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.105827 kubelet[2205]: E1213 03:45:24.105809    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.106196 kubelet[2205]: E1213 03:45:24.106171    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.106243 kubelet[2205]: W1213 03:45:24.106199    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.106243 kubelet[2205]: E1213 03:45:24.106227    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.106694 kubelet[2205]: E1213 03:45:24.106636    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.106694 kubelet[2205]: W1213 03:45:24.106669    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.106780 kubelet[2205]: E1213 03:45:24.106699    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.107023 kubelet[2205]: E1213 03:45:24.106998    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.107072 kubelet[2205]: W1213 03:45:24.107025    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.107072 kubelet[2205]: E1213 03:45:24.107052    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.107401 kubelet[2205]: E1213 03:45:24.107376    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.107467 kubelet[2205]: W1213 03:45:24.107403    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.107467 kubelet[2205]: E1213 03:45:24.107435    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.107820 kubelet[2205]: E1213 03:45:24.107794    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.107868 kubelet[2205]: W1213 03:45:24.107823    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.107868 kubelet[2205]: E1213 03:45:24.107852    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.108180 kubelet[2205]: E1213 03:45:24.108156    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.108225 kubelet[2205]: W1213 03:45:24.108183    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.108225 kubelet[2205]: E1213 03:45:24.108211    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.108598 kubelet[2205]: E1213 03:45:24.108574    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.108646 kubelet[2205]: W1213 03:45:24.108601    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.108646 kubelet[2205]: E1213 03:45:24.108631    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.203181 kubelet[2205]: E1213 03:45:24.203144    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.203181 kubelet[2205]: W1213 03:45:24.203165    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.203181 kubelet[2205]: E1213 03:45:24.203186    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.205292 kubelet[2205]: E1213 03:45:24.203932    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.205292 kubelet[2205]: W1213 03:45:24.203947    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.205292 kubelet[2205]: E1213 03:45:24.203969    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.205292 kubelet[2205]: E1213 03:45:24.204367    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.205292 kubelet[2205]: W1213 03:45:24.204402    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.205292 kubelet[2205]: E1213 03:45:24.204460    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.205292 kubelet[2205]: E1213 03:45:24.204780    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.205292 kubelet[2205]: W1213 03:45:24.204800    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.205292 kubelet[2205]: E1213 03:45:24.204848    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.205292 kubelet[2205]: E1213 03:45:24.205149    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.206380 kubelet[2205]: W1213 03:45:24.205170    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.206380 kubelet[2205]: E1213 03:45:24.205214    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.206380 kubelet[2205]: E1213 03:45:24.206294    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.206380 kubelet[2205]: W1213 03:45:24.206306    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.206380 kubelet[2205]: E1213 03:45:24.206326    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.207014 kubelet[2205]: E1213 03:45:24.206990    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.207014 kubelet[2205]: W1213 03:45:24.207004    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.207643 kubelet[2205]: E1213 03:45:24.207427    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.207643 kubelet[2205]: E1213 03:45:24.207639    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.207643 kubelet[2205]: W1213 03:45:24.207649    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.208169 kubelet[2205]: E1213 03:45:24.207963    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.208169 kubelet[2205]: E1213 03:45:24.208163    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.208169 kubelet[2205]: W1213 03:45:24.208173    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.209486 kubelet[2205]: E1213 03:45:24.208544    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.209486 kubelet[2205]: E1213 03:45:24.209385    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.209486 kubelet[2205]: W1213 03:45:24.209395    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.209486 kubelet[2205]: E1213 03:45:24.209416    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.210081 kubelet[2205]: E1213 03:45:24.210043    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.210081 kubelet[2205]: W1213 03:45:24.210057    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.210600 kubelet[2205]: E1213 03:45:24.210302    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.210600 kubelet[2205]: E1213 03:45:24.210596    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.210600 kubelet[2205]: W1213 03:45:24.210607    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.211094 kubelet[2205]: E1213 03:45:24.210911    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.211238 kubelet[2205]: E1213 03:45:24.211113    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.211238 kubelet[2205]: W1213 03:45:24.211123    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.211728 kubelet[2205]: E1213 03:45:24.211505    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.211728 kubelet[2205]: E1213 03:45:24.211725    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.211728 kubelet[2205]: W1213 03:45:24.211735    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.211996 kubelet[2205]: E1213 03:45:24.211756    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.214576 kubelet[2205]: E1213 03:45:24.214537    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.214576 kubelet[2205]: W1213 03:45:24.214552    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.214875 kubelet[2205]: E1213 03:45:24.214845    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.215747 kubelet[2205]: E1213 03:45:24.215715    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.215747 kubelet[2205]: W1213 03:45:24.215731    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.215942 kubelet[2205]: E1213 03:45:24.215766    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.216216 kubelet[2205]: E1213 03:45:24.216185    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.216216 kubelet[2205]: W1213 03:45:24.216197    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.216216 kubelet[2205]: E1213 03:45:24.216210    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.216738 kubelet[2205]: E1213 03:45:24.216707    2205 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 03:45:24.216738 kubelet[2205]: W1213 03:45:24.216721    2205 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 03:45:24.216738 kubelet[2205]: E1213 03:45:24.216733    2205 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 03:45:24.289553 env[1255]: time="2024-12-13T03:45:24.289140114Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:24.294543 env[1255]: time="2024-12-13T03:45:24.294475808Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:24.296829 env[1255]: time="2024-12-13T03:45:24.296776117Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:24.299066 env[1255]: time="2024-12-13T03:45:24.299003981Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:24.299581 env[1255]: time="2024-12-13T03:45:24.299534559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 03:45:24.309921 env[1255]: time="2024-12-13T03:45:24.309867189Z" level=info msg="CreateContainer within sandbox \"bd2a99fca59650ba3aeed46c06d652ed43e1a58257df64c89d9ae4d595e0e647\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 03:45:24.348308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028797230.mount: Deactivated successfully.
Dec 13 03:45:24.357158 env[1255]: time="2024-12-13T03:45:24.357081312Z" level=info msg="CreateContainer within sandbox \"bd2a99fca59650ba3aeed46c06d652ed43e1a58257df64c89d9ae4d595e0e647\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d46218d091d6870e7598ae039bcce2490e6515a99a71c3ee4b840793f84ed74c\""
Dec 13 03:45:24.359708 env[1255]: time="2024-12-13T03:45:24.359656450Z" level=info msg="StartContainer for \"d46218d091d6870e7598ae039bcce2490e6515a99a71c3ee4b840793f84ed74c\""
Dec 13 03:45:24.414967 systemd[1]: run-containerd-runc-k8s.io-d46218d091d6870e7598ae039bcce2490e6515a99a71c3ee4b840793f84ed74c-runc.vbxcAA.mount: Deactivated successfully.
Dec 13 03:45:24.463432 env[1255]: time="2024-12-13T03:45:24.463380625Z" level=info msg="StartContainer for \"d46218d091d6870e7598ae039bcce2490e6515a99a71c3ee4b840793f84ed74c\" returns successfully"
Dec 13 03:45:24.633653 env[1255]: time="2024-12-13T03:45:24.633522172Z" level=info msg="shim disconnected" id=d46218d091d6870e7598ae039bcce2490e6515a99a71c3ee4b840793f84ed74c
Dec 13 03:45:24.633653 env[1255]: time="2024-12-13T03:45:24.633574991Z" level=warning msg="cleaning up after shim disconnected" id=d46218d091d6870e7598ae039bcce2490e6515a99a71c3ee4b840793f84ed74c namespace=k8s.io
Dec 13 03:45:24.633653 env[1255]: time="2024-12-13T03:45:24.633586092Z" level=info msg="cleaning up dead shim"
Dec 13 03:45:24.643514 env[1255]: time="2024-12-13T03:45:24.643476910Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:45:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2894 runtime=io.containerd.runc.v2\n"
Dec 13 03:45:24.835055 kubelet[2205]: E1213 03:45:24.834983    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:25.049536 env[1255]: time="2024-12-13T03:45:25.043790212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 03:45:25.334947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d46218d091d6870e7598ae039bcce2490e6515a99a71c3ee4b840793f84ed74c-rootfs.mount: Deactivated successfully.
Dec 13 03:45:26.835041 kubelet[2205]: E1213 03:45:26.834680    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:28.837407 kubelet[2205]: E1213 03:45:28.836790    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:30.835403 kubelet[2205]: E1213 03:45:30.835296    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:32.834802 kubelet[2205]: E1213 03:45:32.834748    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:33.263208 env[1255]: time="2024-12-13T03:45:33.262835272Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:33.269164 env[1255]: time="2024-12-13T03:45:33.269097311Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:33.274147 env[1255]: time="2024-12-13T03:45:33.274070004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:33.279043 env[1255]: time="2024-12-13T03:45:33.278952988Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:33.281818 env[1255]: time="2024-12-13T03:45:33.280707008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 03:45:33.288012 env[1255]: time="2024-12-13T03:45:33.287940062Z" level=info msg="CreateContainer within sandbox \"bd2a99fca59650ba3aeed46c06d652ed43e1a58257df64c89d9ae4d595e0e647\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 03:45:33.338269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4076760430.mount: Deactivated successfully.
Dec 13 03:45:33.341258 env[1255]: time="2024-12-13T03:45:33.341150526Z" level=info msg="CreateContainer within sandbox \"bd2a99fca59650ba3aeed46c06d652ed43e1a58257df64c89d9ae4d595e0e647\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6a56d05b43d1870e987f2df0b1355ff08b5d8d51d03670d09d326364a388226c\""
Dec 13 03:45:33.343667 env[1255]: time="2024-12-13T03:45:33.343531895Z" level=info msg="StartContainer for \"6a56d05b43d1870e987f2df0b1355ff08b5d8d51d03670d09d326364a388226c\""
Dec 13 03:45:33.412835 env[1255]: time="2024-12-13T03:45:33.412779272Z" level=info msg="StartContainer for \"6a56d05b43d1870e987f2df0b1355ff08b5d8d51d03670d09d326364a388226c\" returns successfully"
Dec 13 03:45:33.772283 kubelet[2205]: I1213 03:45:33.772226    2205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 03:45:34.485868 kernel: kauditd_printk_skb: 8 callbacks suppressed
Dec 13 03:45:34.486116 kernel: audit: type=1325 audit(1734061534.475:307): table=filter:95 family=2 entries=17 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:34.475000 audit[2950]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:34.475000 audit[2950]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe73328ba0 a2=0 a3=7ffe73328b8c items=0 ppid=2382 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:34.500378 kernel: audit: type=1300 audit(1734061534.475:307): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe73328ba0 a2=0 a3=7ffe73328b8c items=0 ppid=2382 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:34.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:34.508410 kernel: audit: type=1327 audit(1734061534.475:307): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:34.511000 audit[2950]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:34.520514 kernel: audit: type=1325 audit(1734061534.511:308): table=nat:96 family=2 entries=19 op=nft_register_chain pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:34.511000 audit[2950]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe73328ba0 a2=0 a3=7ffe73328b8c items=0 ppid=2382 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:34.534407 kernel: audit: type=1300 audit(1734061534.511:308): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe73328ba0 a2=0 a3=7ffe73328b8c items=0 ppid=2382 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:34.534560 kernel: audit: type=1327 audit(1734061534.511:308): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:34.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:34.834539 kubelet[2205]: E1213 03:45:34.834388    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:35.608241 env[1255]: time="2024-12-13T03:45:35.608088672Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 03:45:35.658475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a56d05b43d1870e987f2df0b1355ff08b5d8d51d03670d09d326364a388226c-rootfs.mount: Deactivated successfully.
Dec 13 03:45:35.664948 env[1255]: time="2024-12-13T03:45:35.664895554Z" level=info msg="shim disconnected" id=6a56d05b43d1870e987f2df0b1355ff08b5d8d51d03670d09d326364a388226c
Dec 13 03:45:35.664948 env[1255]: time="2024-12-13T03:45:35.664941431Z" level=warning msg="cleaning up after shim disconnected" id=6a56d05b43d1870e987f2df0b1355ff08b5d8d51d03670d09d326364a388226c namespace=k8s.io
Dec 13 03:45:35.664948 env[1255]: time="2024-12-13T03:45:35.664952191Z" level=info msg="cleaning up dead shim"
Dec 13 03:45:35.677420 env[1255]: time="2024-12-13T03:45:35.677387176Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:45:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2964 runtime=io.containerd.runc.v2\n"
Dec 13 03:45:35.679111 kubelet[2205]: I1213 03:45:35.678233    2205 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 03:45:35.714071 kubelet[2205]: I1213 03:45:35.714042    2205 topology_manager.go:215] "Topology Admit Handler" podUID="2f24277a-80c8-4080-b170-650a6deabb6f" podNamespace="kube-system" podName="coredns-76f75df574-hv9jf"
Dec 13 03:45:35.721859 kubelet[2205]: I1213 03:45:35.721803    2205 topology_manager.go:215] "Topology Admit Handler" podUID="c5bfdb85-3d00-442f-80af-ff6cf6916b5c" podNamespace="kube-system" podName="coredns-76f75df574-xpw7r"
Dec 13 03:45:35.726631 kubelet[2205]: I1213 03:45:35.726605    2205 topology_manager.go:215] "Topology Admit Handler" podUID="fd36f021-0ca4-4c3d-84a7-4ab0c0604448" podNamespace="calico-apiserver" podName="calico-apiserver-574c4c684d-8pf6t"
Dec 13 03:45:35.730094 kubelet[2205]: I1213 03:45:35.730080    2205 topology_manager.go:215] "Topology Admit Handler" podUID="bc3d7731-763a-422a-9334-38f54b5935d1" podNamespace="calico-system" podName="calico-kube-controllers-59bc466bbc-4dbd6"
Dec 13 03:45:35.737044 kubelet[2205]: I1213 03:45:35.735042    2205 topology_manager.go:215] "Topology Admit Handler" podUID="35fca77a-f366-4df2-8058-b9331e4164b4" podNamespace="calico-apiserver" podName="calico-apiserver-574c4c684d-2dhzt"
Dec 13 03:45:35.907576 kubelet[2205]: I1213 03:45:35.906776    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvkh\" (UniqueName: \"kubernetes.io/projected/fd36f021-0ca4-4c3d-84a7-4ab0c0604448-kube-api-access-vgvkh\") pod \"calico-apiserver-574c4c684d-8pf6t\" (UID: \"fd36f021-0ca4-4c3d-84a7-4ab0c0604448\") " pod="calico-apiserver/calico-apiserver-574c4c684d-8pf6t"
Dec 13 03:45:35.907576 kubelet[2205]: I1213 03:45:35.906822    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35fca77a-f366-4df2-8058-b9331e4164b4-calico-apiserver-certs\") pod \"calico-apiserver-574c4c684d-2dhzt\" (UID: \"35fca77a-f366-4df2-8058-b9331e4164b4\") " pod="calico-apiserver/calico-apiserver-574c4c684d-2dhzt"
Dec 13 03:45:35.907576 kubelet[2205]: I1213 03:45:35.906851    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5rfr\" (UniqueName: \"kubernetes.io/projected/bc3d7731-763a-422a-9334-38f54b5935d1-kube-api-access-t5rfr\") pod \"calico-kube-controllers-59bc466bbc-4dbd6\" (UID: \"bc3d7731-763a-422a-9334-38f54b5935d1\") " pod="calico-system/calico-kube-controllers-59bc466bbc-4dbd6"
Dec 13 03:45:35.907576 kubelet[2205]: I1213 03:45:35.906878    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glr7p\" (UniqueName: \"kubernetes.io/projected/2f24277a-80c8-4080-b170-650a6deabb6f-kube-api-access-glr7p\") pod \"coredns-76f75df574-hv9jf\" (UID: \"2f24277a-80c8-4080-b170-650a6deabb6f\") " pod="kube-system/coredns-76f75df574-hv9jf"
Dec 13 03:45:35.907576 kubelet[2205]: I1213 03:45:35.906904    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fd36f021-0ca4-4c3d-84a7-4ab0c0604448-calico-apiserver-certs\") pod \"calico-apiserver-574c4c684d-8pf6t\" (UID: \"fd36f021-0ca4-4c3d-84a7-4ab0c0604448\") " pod="calico-apiserver/calico-apiserver-574c4c684d-8pf6t"
Dec 13 03:45:35.908047 kubelet[2205]: I1213 03:45:35.906926    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f24277a-80c8-4080-b170-650a6deabb6f-config-volume\") pod \"coredns-76f75df574-hv9jf\" (UID: \"2f24277a-80c8-4080-b170-650a6deabb6f\") " pod="kube-system/coredns-76f75df574-hv9jf"
Dec 13 03:45:35.908047 kubelet[2205]: I1213 03:45:35.906950    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc3d7731-763a-422a-9334-38f54b5935d1-tigera-ca-bundle\") pod \"calico-kube-controllers-59bc466bbc-4dbd6\" (UID: \"bc3d7731-763a-422a-9334-38f54b5935d1\") " pod="calico-system/calico-kube-controllers-59bc466bbc-4dbd6"
Dec 13 03:45:35.908047 kubelet[2205]: I1213 03:45:35.906986    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5bfdb85-3d00-442f-80af-ff6cf6916b5c-config-volume\") pod \"coredns-76f75df574-xpw7r\" (UID: \"c5bfdb85-3d00-442f-80af-ff6cf6916b5c\") " pod="kube-system/coredns-76f75df574-xpw7r"
Dec 13 03:45:35.908047 kubelet[2205]: I1213 03:45:35.907011    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjwsv\" (UniqueName: \"kubernetes.io/projected/c5bfdb85-3d00-442f-80af-ff6cf6916b5c-kube-api-access-cjwsv\") pod \"coredns-76f75df574-xpw7r\" (UID: \"c5bfdb85-3d00-442f-80af-ff6cf6916b5c\") " pod="kube-system/coredns-76f75df574-xpw7r"
Dec 13 03:45:35.908047 kubelet[2205]: I1213 03:45:35.907036    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m88vx\" (UniqueName: \"kubernetes.io/projected/35fca77a-f366-4df2-8058-b9331e4164b4-kube-api-access-m88vx\") pod \"calico-apiserver-574c4c684d-2dhzt\" (UID: \"35fca77a-f366-4df2-8058-b9331e4164b4\") " pod="calico-apiserver/calico-apiserver-574c4c684d-2dhzt"
Dec 13 03:45:36.096729 env[1255]: time="2024-12-13T03:45:36.096636105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 03:45:36.321322 env[1255]: time="2024-12-13T03:45:36.321241946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hv9jf,Uid:2f24277a-80c8-4080-b170-650a6deabb6f,Namespace:kube-system,Attempt:0,}"
Dec 13 03:45:36.325664 env[1255]: time="2024-12-13T03:45:36.324942194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xpw7r,Uid:c5bfdb85-3d00-442f-80af-ff6cf6916b5c,Namespace:kube-system,Attempt:0,}"
Dec 13 03:45:36.332082 env[1255]: time="2024-12-13T03:45:36.331942859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574c4c684d-8pf6t,Uid:fd36f021-0ca4-4c3d-84a7-4ab0c0604448,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 03:45:36.339999 env[1255]: time="2024-12-13T03:45:36.339930189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574c4c684d-2dhzt,Uid:35fca77a-f366-4df2-8058-b9331e4164b4,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 03:45:36.345031 env[1255]: time="2024-12-13T03:45:36.344965579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59bc466bbc-4dbd6,Uid:bc3d7731-763a-422a-9334-38f54b5935d1,Namespace:calico-system,Attempt:0,}"
Dec 13 03:45:36.781624 env[1255]: time="2024-12-13T03:45:36.781556567Z" level=error msg="Failed to destroy network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.784626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf-shm.mount: Deactivated successfully.
Dec 13 03:45:36.785049 env[1255]: time="2024-12-13T03:45:36.785014098Z" level=error msg="encountered an error cleaning up failed sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.785192 env[1255]: time="2024-12-13T03:45:36.785154643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59bc466bbc-4dbd6,Uid:bc3d7731-763a-422a-9334-38f54b5935d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.786267 kubelet[2205]: E1213 03:45:36.786198    2205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.786378 kubelet[2205]: E1213 03:45:36.786360    2205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59bc466bbc-4dbd6"
Dec 13 03:45:36.786432 kubelet[2205]: E1213 03:45:36.786390    2205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59bc466bbc-4dbd6"
Dec 13 03:45:36.786492 kubelet[2205]: E1213 03:45:36.786473    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59bc466bbc-4dbd6_calico-system(bc3d7731-763a-422a-9334-38f54b5935d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59bc466bbc-4dbd6_calico-system(bc3d7731-763a-422a-9334-38f54b5935d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59bc466bbc-4dbd6" podUID="bc3d7731-763a-422a-9334-38f54b5935d1"
Dec 13 03:45:36.799610 env[1255]: time="2024-12-13T03:45:36.799454904Z" level=error msg="Failed to destroy network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.802197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762-shm.mount: Deactivated successfully.
Dec 13 03:45:36.805099 env[1255]: time="2024-12-13T03:45:36.805043143Z" level=error msg="encountered an error cleaning up failed sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.805213 env[1255]: time="2024-12-13T03:45:36.805115279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574c4c684d-2dhzt,Uid:35fca77a-f366-4df2-8058-b9331e4164b4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.805392 kubelet[2205]: E1213 03:45:36.805370    2205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.805478 kubelet[2205]: E1213 03:45:36.805441    2205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574c4c684d-2dhzt"
Dec 13 03:45:36.805544 kubelet[2205]: E1213 03:45:36.805467    2205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574c4c684d-2dhzt"
Dec 13 03:45:36.805624 kubelet[2205]: E1213 03:45:36.805576    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-574c4c684d-2dhzt_calico-apiserver(35fca77a-f366-4df2-8058-b9331e4164b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-574c4c684d-2dhzt_calico-apiserver(35fca77a-f366-4df2-8058-b9331e4164b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574c4c684d-2dhzt" podUID="35fca77a-f366-4df2-8058-b9331e4164b4"
Dec 13 03:45:36.814820 env[1255]: time="2024-12-13T03:45:36.814759435Z" level=error msg="Failed to destroy network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.817732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc-shm.mount: Deactivated successfully.
Dec 13 03:45:36.820023 env[1255]: time="2024-12-13T03:45:36.819972758Z" level=error msg="encountered an error cleaning up failed sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.820105 env[1255]: time="2024-12-13T03:45:36.820041146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hv9jf,Uid:2f24277a-80c8-4080-b170-650a6deabb6f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.820296 kubelet[2205]: E1213 03:45:36.820268    2205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.820378 kubelet[2205]: E1213 03:45:36.820369    2205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hv9jf"
Dec 13 03:45:36.821019 kubelet[2205]: E1213 03:45:36.820417    2205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hv9jf"
Dec 13 03:45:36.821019 kubelet[2205]: E1213 03:45:36.820509    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hv9jf_kube-system(2f24277a-80c8-4080-b170-650a6deabb6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hv9jf_kube-system(2f24277a-80c8-4080-b170-650a6deabb6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hv9jf" podUID="2f24277a-80c8-4080-b170-650a6deabb6f"
Dec 13 03:45:36.823576 env[1255]: time="2024-12-13T03:45:36.823504049Z" level=error msg="Failed to destroy network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.826283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79-shm.mount: Deactivated successfully.
Dec 13 03:45:36.828695 env[1255]: time="2024-12-13T03:45:36.828575135Z" level=error msg="encountered an error cleaning up failed sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.829381 env[1255]: time="2024-12-13T03:45:36.828727381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574c4c684d-8pf6t,Uid:fd36f021-0ca4-4c3d-84a7-4ab0c0604448,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.829553 kubelet[2205]: E1213 03:45:36.828997    2205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.829553 kubelet[2205]: E1213 03:45:36.829070    2205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574c4c684d-8pf6t"
Dec 13 03:45:36.829553 kubelet[2205]: E1213 03:45:36.829103    2205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574c4c684d-8pf6t"
Dec 13 03:45:36.829790 kubelet[2205]: E1213 03:45:36.829186    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-574c4c684d-8pf6t_calico-apiserver(fd36f021-0ca4-4c3d-84a7-4ab0c0604448)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-574c4c684d-8pf6t_calico-apiserver(fd36f021-0ca4-4c3d-84a7-4ab0c0604448)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574c4c684d-8pf6t" podUID="fd36f021-0ca4-4c3d-84a7-4ab0c0604448"
Dec 13 03:45:36.842419 env[1255]: time="2024-12-13T03:45:36.842356821Z" level=error msg="Failed to destroy network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.842843 env[1255]: time="2024-12-13T03:45:36.842804082Z" level=error msg="encountered an error cleaning up failed sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.842903 env[1255]: time="2024-12-13T03:45:36.842862252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xpw7r,Uid:c5bfdb85-3d00-442f-80af-ff6cf6916b5c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.843118 kubelet[2205]: E1213 03:45:36.843099    2205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.843257 kubelet[2205]: E1213 03:45:36.843246    2205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xpw7r"
Dec 13 03:45:36.843362 kubelet[2205]: E1213 03:45:36.843349    2205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xpw7r"
Dec 13 03:45:36.843479 kubelet[2205]: E1213 03:45:36.843468    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xpw7r_kube-system(c5bfdb85-3d00-442f-80af-ff6cf6916b5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xpw7r_kube-system(c5bfdb85-3d00-442f-80af-ff6cf6916b5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xpw7r" podUID="c5bfdb85-3d00-442f-80af-ff6cf6916b5c"
Dec 13 03:45:36.847667 env[1255]: time="2024-12-13T03:45:36.847612826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v5c5r,Uid:e67c0431-ca4e-483a-b78f-aa6377b70035,Namespace:calico-system,Attempt:0,}"
Dec 13 03:45:36.913493 env[1255]: time="2024-12-13T03:45:36.913442878Z" level=error msg="Failed to destroy network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.913960 env[1255]: time="2024-12-13T03:45:36.913898435Z" level=error msg="encountered an error cleaning up failed sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.914106 env[1255]: time="2024-12-13T03:45:36.914066992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v5c5r,Uid:e67c0431-ca4e-483a-b78f-aa6377b70035,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.914481 kubelet[2205]: E1213 03:45:36.914455    2205 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:36.914787 kubelet[2205]: E1213 03:45:36.914533    2205 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v5c5r"
Dec 13 03:45:36.914787 kubelet[2205]: E1213 03:45:36.914561    2205 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v5c5r"
Dec 13 03:45:36.914787 kubelet[2205]: E1213 03:45:36.914639    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v5c5r_calico-system(e67c0431-ca4e-483a-b78f-aa6377b70035)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v5c5r_calico-system(e67c0431-ca4e-483a-b78f-aa6377b70035)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:37.102311 kubelet[2205]: I1213 03:45:37.101295    2205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:37.107296 env[1255]: time="2024-12-13T03:45:37.107204974Z" level=info msg="StopPodSandbox for \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\""
Dec 13 03:45:37.116577 env[1255]: time="2024-12-13T03:45:37.111635897Z" level=info msg="StopPodSandbox for \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\""
Dec 13 03:45:37.116577 env[1255]: time="2024-12-13T03:45:37.115590492Z" level=info msg="StopPodSandbox for \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\""
Dec 13 03:45:37.116779 kubelet[2205]: I1213 03:45:37.110125    2205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:37.116779 kubelet[2205]: I1213 03:45:37.114398    2205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:45:37.118437 kubelet[2205]: I1213 03:45:37.117713    2205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:37.118733 env[1255]: time="2024-12-13T03:45:37.118626471Z" level=info msg="StopPodSandbox for \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\""
Dec 13 03:45:37.122598 kubelet[2205]: I1213 03:45:37.121774    2205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:37.123322 env[1255]: time="2024-12-13T03:45:37.123240587Z" level=info msg="StopPodSandbox for \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\""
Dec 13 03:45:37.130628 kubelet[2205]: I1213 03:45:37.130563    2205 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:37.133437 env[1255]: time="2024-12-13T03:45:37.133361318Z" level=info msg="StopPodSandbox for \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\""
Dec 13 03:45:37.240621 env[1255]: time="2024-12-13T03:45:37.240567566Z" level=error msg="StopPodSandbox for \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\" failed" error="failed to destroy network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:37.241023 env[1255]: time="2024-12-13T03:45:37.240982476Z" level=error msg="StopPodSandbox for \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\" failed" error="failed to destroy network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:37.241099 kubelet[2205]: E1213 03:45:37.241069    2205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:37.241188 kubelet[2205]: E1213 03:45:37.241171    2205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"}
Dec 13 03:45:37.241252 kubelet[2205]: E1213 03:45:37.241237    2205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd36f021-0ca4-4c3d-84a7-4ab0c0604448\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 03:45:37.241375 kubelet[2205]: E1213 03:45:37.241287    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd36f021-0ca4-4c3d-84a7-4ab0c0604448\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574c4c684d-8pf6t" podUID="fd36f021-0ca4-4c3d-84a7-4ab0c0604448"
Dec 13 03:45:37.241601 kubelet[2205]: E1213 03:45:37.241567    2205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:45:37.241652 kubelet[2205]: E1213 03:45:37.241611    2205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"}
Dec 13 03:45:37.241652 kubelet[2205]: E1213 03:45:37.241645    2205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f24277a-80c8-4080-b170-650a6deabb6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 03:45:37.241732 kubelet[2205]: E1213 03:45:37.241688    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f24277a-80c8-4080-b170-650a6deabb6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hv9jf" podUID="2f24277a-80c8-4080-b170-650a6deabb6f"
Dec 13 03:45:37.249733 env[1255]: time="2024-12-13T03:45:37.249683808Z" level=error msg="StopPodSandbox for \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\" failed" error="failed to destroy network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:37.250201 kubelet[2205]: E1213 03:45:37.250169    2205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:37.250299 kubelet[2205]: E1213 03:45:37.250228    2205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"}
Dec 13 03:45:37.250299 kubelet[2205]: E1213 03:45:37.250267    2205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e67c0431-ca4e-483a-b78f-aa6377b70035\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 03:45:37.250427 kubelet[2205]: E1213 03:45:37.250319    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e67c0431-ca4e-483a-b78f-aa6377b70035\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v5c5r" podUID="e67c0431-ca4e-483a-b78f-aa6377b70035"
Dec 13 03:45:37.278048 env[1255]: time="2024-12-13T03:45:37.277993441Z" level=error msg="StopPodSandbox for \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\" failed" error="failed to destroy network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:37.278690 kubelet[2205]: E1213 03:45:37.278509    2205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:37.278690 kubelet[2205]: E1213 03:45:37.278557    2205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"}
Dec 13 03:45:37.278690 kubelet[2205]: E1213 03:45:37.278622    2205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc3d7731-763a-422a-9334-38f54b5935d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 03:45:37.278690 kubelet[2205]: E1213 03:45:37.278667    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc3d7731-763a-422a-9334-38f54b5935d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59bc466bbc-4dbd6" podUID="bc3d7731-763a-422a-9334-38f54b5935d1"
Dec 13 03:45:37.283885 env[1255]: time="2024-12-13T03:45:37.283842950Z" level=error msg="StopPodSandbox for \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\" failed" error="failed to destroy network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:37.284251 kubelet[2205]: E1213 03:45:37.284228    2205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:37.284318 kubelet[2205]: E1213 03:45:37.284291    2205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"}
Dec 13 03:45:37.284384 kubelet[2205]: E1213 03:45:37.284369    2205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5bfdb85-3d00-442f-80af-ff6cf6916b5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 03:45:37.284453 kubelet[2205]: E1213 03:45:37.284408    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5bfdb85-3d00-442f-80af-ff6cf6916b5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xpw7r" podUID="c5bfdb85-3d00-442f-80af-ff6cf6916b5c"
Dec 13 03:45:37.289428 env[1255]: time="2024-12-13T03:45:37.289391233Z" level=error msg="StopPodSandbox for \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\" failed" error="failed to destroy network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 03:45:37.289731 kubelet[2205]: E1213 03:45:37.289698    2205 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:37.289798 kubelet[2205]: E1213 03:45:37.289758    2205 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"}
Dec 13 03:45:37.289834 kubelet[2205]: E1213 03:45:37.289802    2205 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35fca77a-f366-4df2-8058-b9331e4164b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 03:45:37.289905 kubelet[2205]: E1213 03:45:37.289858    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35fca77a-f366-4df2-8058-b9331e4164b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574c4c684d-2dhzt" podUID="35fca77a-f366-4df2-8058-b9331e4164b4"
Dec 13 03:45:37.660550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b-shm.mount: Deactivated successfully.
Dec 13 03:45:47.511326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180610335.mount: Deactivated successfully.
Dec 13 03:45:47.570809 env[1255]: time="2024-12-13T03:45:47.570757958Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:47.574260 env[1255]: time="2024-12-13T03:45:47.574219424Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:47.576901 env[1255]: time="2024-12-13T03:45:47.576872690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:47.579148 env[1255]: time="2024-12-13T03:45:47.579109914Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:47.579848 env[1255]: time="2024-12-13T03:45:47.579806373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Dec 13 03:45:47.674659 env[1255]: time="2024-12-13T03:45:47.674497217Z" level=info msg="CreateContainer within sandbox \"bd2a99fca59650ba3aeed46c06d652ed43e1a58257df64c89d9ae4d595e0e647\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 03:45:47.697430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2697682849.mount: Deactivated successfully.
Dec 13 03:45:47.704014 env[1255]: time="2024-12-13T03:45:47.703954828Z" level=info msg="CreateContainer within sandbox \"bd2a99fca59650ba3aeed46c06d652ed43e1a58257df64c89d9ae4d595e0e647\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"adb7c7aca8ced17dd3b8a564c878cdfe4b8986b02ba5ebb010cc5f919d5cef10\""
Dec 13 03:45:47.708615 env[1255]: time="2024-12-13T03:45:47.707794304Z" level=info msg="StartContainer for \"adb7c7aca8ced17dd3b8a564c878cdfe4b8986b02ba5ebb010cc5f919d5cef10\""
Dec 13 03:45:47.818134 env[1255]: time="2024-12-13T03:45:47.818011267Z" level=info msg="StartContainer for \"adb7c7aca8ced17dd3b8a564c878cdfe4b8986b02ba5ebb010cc5f919d5cef10\" returns successfully"
Dec 13 03:45:48.015532 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 03:45:48.016984 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 13 03:45:48.247475 kubelet[2205]: I1213 03:45:48.247382    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-pwl98" podStartSLOduration=1.248435948 podStartE2EDuration="31.237128311s" podCreationTimestamp="2024-12-13 03:45:17 +0000 UTC" firstStartedPulling="2024-12-13 03:45:17.594497108 +0000 UTC m=+20.373097140" lastFinishedPulling="2024-12-13 03:45:47.583189471 +0000 UTC m=+50.361789503" observedRunningTime="2024-12-13 03:45:48.234232269 +0000 UTC m=+51.012832331" watchObservedRunningTime="2024-12-13 03:45:48.237128311 +0000 UTC m=+51.015728353"
Dec 13 03:45:48.836445 env[1255]: time="2024-12-13T03:45:48.836272282Z" level=info msg="StopPodSandbox for \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\""
Dec 13 03:45:49.064000 audit[3432]: AVC avc:  denied  { write } for  pid=3432 comm="tee" name="fd" dev="proc" ino=25536 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 03:45:49.069450 kernel: audit: type=1400 audit(1734061549.064:309): avc:  denied  { write } for  pid=3432 comm="tee" name="fd" dev="proc" ino=25536 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 03:45:49.064000 audit[3432]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe296e89f8 a2=241 a3=1b6 items=1 ppid=3392 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.064000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Dec 13 03:45:49.077524 kernel: audit: type=1300 audit(1734061549.064:309): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe296e89f8 a2=241 a3=1b6 items=1 ppid=3392 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.077602 kernel: audit: type=1307 audit(1734061549.064:309): cwd="/etc/service/enabled/bird6/log"
Dec 13 03:45:49.064000 audit: PATH item=0 name="/dev/fd/63" inode=26488 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:45:49.084359 kernel: audit: type=1302 audit(1734061549.064:309): item=0 name="/dev/fd/63" inode=26488 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:45:49.064000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 03:45:49.097424 kernel: audit: type=1327 audit(1734061549.064:309): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 03:45:49.078000 audit[3425]: AVC avc:  denied  { write } for  pid=3425 comm="tee" name="fd" dev="proc" ino=25542 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 03:45:49.104411 kernel: audit: type=1400 audit(1734061549.078:310): avc:  denied  { write } for  pid=3425 comm="tee" name="fd" dev="proc" ino=25542 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 03:45:49.078000 audit[3425]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffba4139f8 a2=241 a3=1b6 items=1 ppid=3400 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.117417 kernel: audit: type=1300 audit(1734061549.078:310): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffba4139f8 a2=241 a3=1b6 items=1 ppid=3400 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.078000 audit: CWD cwd="/etc/service/enabled/felix/log"
Dec 13 03:45:49.124411 kernel: audit: type=1307 audit(1734061549.078:310): cwd="/etc/service/enabled/felix/log"
Dec 13 03:45:49.078000 audit: PATH item=0 name="/dev/fd/63" inode=26487 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:45:49.129517 kernel: audit: type=1302 audit(1734061549.078:310): item=0 name="/dev/fd/63" inode=26487 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:45:49.078000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 03:45:49.135438 kernel: audit: type=1327 audit(1734061549.078:310): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 03:45:49.092000 audit[3449]: AVC avc:  denied  { write } for  pid=3449 comm="tee" name="fd" dev="proc" ino=26503 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 03:45:49.092000 audit[3449]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe1463f9f8 a2=241 a3=1b6 items=1 ppid=3394 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.092000 audit: CWD cwd="/etc/service/enabled/confd/log"
Dec 13 03:45:49.092000 audit: PATH item=0 name="/dev/fd/63" inode=25545 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:45:49.092000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 03:45:49.138000 audit[3440]: AVC avc:  denied  { write } for  pid=3440 comm="tee" name="fd" dev="proc" ino=25555 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 03:45:49.138000 audit[3440]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe056d79e9 a2=241 a3=1b6 items=1 ppid=3396 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.138000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Dec 13 03:45:49.138000 audit: PATH item=0 name="/dev/fd/63" inode=26497 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:45:49.138000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 03:45:49.167000 audit[3459]: AVC avc:  denied  { write } for  pid=3459 comm="tee" name="fd" dev="proc" ino=26628 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 03:45:49.176189 systemd[1]: run-containerd-runc-k8s.io-adb7c7aca8ced17dd3b8a564c878cdfe4b8986b02ba5ebb010cc5f919d5cef10-runc.xwbz5r.mount: Deactivated successfully.
Dec 13 03:45:49.167000 audit[3459]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0b77d9f9 a2=241 a3=1b6 items=1 ppid=3384 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.167000 audit: CWD cwd="/etc/service/enabled/bird/log"
Dec 13 03:45:49.167000 audit: PATH item=0 name="/dev/fd/63" inode=25549 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:45:49.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 03:45:49.190000 audit[3461]: AVC avc:  denied  { write } for  pid=3461 comm="tee" name="fd" dev="proc" ino=26632 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 03:45:49.190000 audit[3461]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdaa3209e8 a2=241 a3=1b6 items=1 ppid=3386 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.190000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Dec 13 03:45:49.190000 audit: PATH item=0 name="/dev/fd/63" inode=25552 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:45:49.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 03:45:49.193000 audit[3463]: AVC avc:  denied  { write } for  pid=3463 comm="tee" name="fd" dev="proc" ino=26638 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Dec 13 03:45:49.193000 audit[3463]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcb6ea39fa a2=241 a3=1b6 items=1 ppid=3387 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.193000 audit: CWD cwd="/etc/service/enabled/cni/log"
Dec 13 03:45:49.193000 audit: PATH item=0 name="/dev/fd/63" inode=25559 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 03:45:49.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Dec 13 03:45:49.511724 systemd[1]: run-containerd-runc-k8s.io-adb7c7aca8ced17dd3b8a564c878cdfe4b8986b02ba5ebb010cc5f919d5cef10-runc.KiXPa0.mount: Deactivated successfully.
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.029 [INFO][3359] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.030 [INFO][3359] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" iface="eth0" netns="/var/run/netns/cni-7a19c44d-ceb0-1034-713a-2030d2c4c553"
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.030 [INFO][3359] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" iface="eth0" netns="/var/run/netns/cni-7a19c44d-ceb0-1034-713a-2030d2c4c553"
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.045 [INFO][3359] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" iface="eth0" netns="/var/run/netns/cni-7a19c44d-ceb0-1034-713a-2030d2c4c553"
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.046 [INFO][3359] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.046 [INFO][3359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.527 [INFO][3408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" HandleID="k8s-pod-network.9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.530 [INFO][3408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.530 [INFO][3408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.544 [WARNING][3408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" HandleID="k8s-pod-network.9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.545 [INFO][3408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" HandleID="k8s-pod-network.9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.546 [INFO][3408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:49.549741 env[1255]: 2024-12-13 03:45:49.548 [INFO][3359] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:49.553508 env[1255]: time="2024-12-13T03:45:49.552411528Z" level=info msg="TearDown network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\" successfully"
Dec 13 03:45:49.553508 env[1255]: time="2024-12-13T03:45:49.552458686Z" level=info msg="StopPodSandbox for \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\" returns successfully"
Dec 13 03:45:49.553508 env[1255]: time="2024-12-13T03:45:49.553172728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574c4c684d-2dhzt,Uid:35fca77a-f366-4df2-8058-b9331e4164b4,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 03:45:49.552722 systemd[1]: run-netns-cni\x2d7a19c44d\x2dceb0\x2d1034\x2d713a\x2d2030d2c4c553.mount: Deactivated successfully.
Dec 13 03:45:49.570000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.570000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.570000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.570000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.570000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.570000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.570000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.570000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.570000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.570000 audit: BPF prog-id=10 op=LOAD
Dec 13 03:45:49.570000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc18d1d90 a2=98 a3=3 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.573000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit: BPF prog-id=11 op=LOAD
Dec 13 03:45:49.576000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc18d1b70 a2=74 a3=540051 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.576000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.576000 audit: BPF prog-id=12 op=LOAD
Dec 13 03:45:49.576000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc18d1ba0 a2=94 a3=2 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.588000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 03:45:49.818000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.818000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.818000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.818000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.818000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.818000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.818000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.818000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.818000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.818000 audit: BPF prog-id=13 op=LOAD
Dec 13 03:45:49.818000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc18d1a60 a2=40 a3=1 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.818000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.821000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 03:45:49.821000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.821000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcc18d1b30 a2=50 a3=7ffcc18d1c10 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.821000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.836585 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 03:45:49.838449 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie0c8151690b: link becomes ready
Dec 13 03:45:49.840510 systemd-networkd[1020]: calie0c8151690b: Link UP
Dec 13 03:45:49.840716 systemd-networkd[1020]: calie0c8151690b: Gained carrier
Dec 13 03:45:49.849058 env[1255]: time="2024-12-13T03:45:49.849015330Z" level=info msg="StopPodSandbox for \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\""
Dec 13 03:45:49.871000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.871000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc18d1a70 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.871000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.871000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc18d1aa0 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.871000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.871000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc18d19b0 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.871000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.871000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc18d1ac0 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.871000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.871000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc18d1aa0 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.871000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.871000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc18d1a90 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.871000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.871000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc18d1ac0 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc18d1aa0 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.872000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc18d1ac0 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.872000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcc18d1a90 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.872000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcc18d1b00 a2=28 a3=0 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.872000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcc18d18b0 a2=50 a3=1 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.872000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.872000 audit: BPF prog-id=14 op=LOAD
Dec 13 03:45:49.872000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcc18d18b0 a2=94 a3=5 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.872000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.873000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcc18d1960 a2=50 a3=1 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.873000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffcc18d1a80 a2=4 a3=38 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.873000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.873000 audit[3540]: AVC avc:  denied  { confidentiality } for  pid=3540 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Dec 13 03:45:49.873000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcc18d1ad0 a2=94 a3=6 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.873000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { confidentiality } for  pid=3540 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Dec 13 03:45:49.875000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcc18d1280 a2=94 a3=83 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.875000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { perfmon } for  pid=3540 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { bpf } for  pid=3540 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:49.875000 audit[3540]: AVC avc:  denied  { confidentiality } for  pid=3540 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Dec 13 03:45:49.875000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcc18d1280 a2=94 a3=83 items=0 ppid=3401 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:49.875000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.665 [INFO][3545] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0 calico-apiserver-574c4c684d- calico-apiserver  35fca77a-f366-4df2-8058-b9331e4164b4 793 0 2024-12-13 03:45:16 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:574c4c684d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-3510-3-6-b-896f86a818.novalocal  calico-apiserver-574c4c684d-2dhzt eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie0c8151690b  [] []}} ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-2dhzt" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.665 [INFO][3545] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-2dhzt" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.750 [INFO][3559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" HandleID="k8s-pod-network.9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.767 [INFO][3559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" HandleID="k8s-pod-network.9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318e20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-6-b-896f86a818.novalocal", "pod":"calico-apiserver-574c4c684d-2dhzt", "timestamp":"2024-12-13 03:45:49.750471641 +0000 UTC"}, Hostname:"ci-3510-3-6-b-896f86a818.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.767 [INFO][3559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.767 [INFO][3559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.767 [INFO][3559] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-b-896f86a818.novalocal'
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.773 [INFO][3559] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.783 [INFO][3559] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.790 [INFO][3559] ipam/ipam.go 489: Trying affinity for 192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.794 [INFO][3559] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.797 [INFO][3559] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.797 [INFO][3559] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.799 [INFO][3559] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.804 [INFO][3559] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.811 [INFO][3559] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.129/26] block=192.168.124.128/26 handle="k8s-pod-network.9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.811 [INFO][3559] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.129/26] handle="k8s-pod-network.9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.811 [INFO][3559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:49.918475 env[1255]: 2024-12-13 03:45:49.811 [INFO][3559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.129/26] IPv6=[] ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" HandleID="k8s-pod-network.9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:49.919128 env[1255]: 2024-12-13 03:45:49.814 [INFO][3545] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-2dhzt" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0", GenerateName:"calico-apiserver-574c4c684d-", Namespace:"calico-apiserver", SelfLink:"", UID:"35fca77a-f366-4df2-8058-b9331e4164b4", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574c4c684d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"", Pod:"calico-apiserver-574c4c684d-2dhzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0c8151690b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:49.919128 env[1255]: 2024-12-13 03:45:49.814 [INFO][3545] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.129/32] ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-2dhzt" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:49.919128 env[1255]: 2024-12-13 03:45:49.814 [INFO][3545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0c8151690b ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-2dhzt" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:49.919128 env[1255]: 2024-12-13 03:45:49.847 [INFO][3545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-2dhzt" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:49.919128 env[1255]: 2024-12-13 03:45:49.847 [INFO][3545] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-2dhzt" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0", GenerateName:"calico-apiserver-574c4c684d-", Namespace:"calico-apiserver", SelfLink:"", UID:"35fca77a-f366-4df2-8058-b9331e4164b4", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574c4c684d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f", Pod:"calico-apiserver-574c4c684d-2dhzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0c8151690b", MAC:"72:d1:74:49:68:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:49.919128 env[1255]: 2024-12-13 03:45:49.906 [INFO][3545] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-2dhzt" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
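The two endpoint dumps above carry the useful fields (ContainerID, Pod, InterfaceName, MAC, IPNetworks) inside one long Go struct literal. A small illustrative helper for pulling them back out when grepping these journals; the regex simply assumes the Field:"value" quoting visible in the lines above.

    import re

    def endpoint_fields(line, names=("ContainerID", "Pod", "InterfaceName", "MAC")):
        """Extract Field:"value" pairs from a Calico CNI WorkloadEndpoint log line."""
        out = {}
        for name in names:
            match = re.search(rf'{name}:"([^"]*)"', line)
            if match:
                out[name] = match.group(1)
        return out

    # Against the 'Added Mac, interface name, and active container ID' line above
    # this yields: {'ContainerID': '9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f',
    #               'Pod': 'calico-apiserver-574c4c684d-2dhzt',
    #               'InterfaceName': 'calie0c8151690b', 'MAC': '72:d1:74:49:68:61'}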
Dec 13 03:45:49.960143 env[1255]: time="2024-12-13T03:45:49.954894322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:49.960143 env[1255]: time="2024-12-13T03:45:49.954943064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:49.960143 env[1255]: time="2024-12-13T03:45:49.954963132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:49.960143 env[1255]: time="2024-12-13T03:45:49.955157336Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f pid=3606 runtime=io.containerd.runc.v2
Dec 13 03:45:50.013000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.013000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.013000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.013000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.013000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.013000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.013000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.013000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.013000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.013000 audit: BPF prog-id=15 op=LOAD
Dec 13 03:45:50.013000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee276f9c0 a2=98 a3=1999999999999999 items=0 ppid=3401 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.013000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F
Dec 13 03:45:50.032000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit: BPF prog-id=16 op=LOAD
Dec 13 03:45:50.032000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee276f8a0 a2=74 a3=ffff items=0 ppid=3401 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.032000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F
Dec 13 03:45:50.032000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { perfmon } for  pid=3630 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit[3630]: AVC avc:  denied  { bpf } for  pid=3630 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.032000 audit: BPF prog-id=17 op=LOAD
Dec 13 03:45:50.032000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee276f8e0 a2=40 a3=7ffee276fac0 items=0 ppid=3401 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.032000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F
Dec 13 03:45:50.032000 audit: BPF prog-id=17 op=UNLOAD
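The PROCTITLE records above hex-encode the bpftool command line with NUL-separated arguments. A small standard-library sketch for decoding them; on the pid 3630 events it prints bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash key 4 value 1 entries 65535 name calico_failsafe_ports_ (the map name is cut short by the kernel's proctitle size limit).

    def decode_proctitle(hex_title: str) -> str:
        """Decode an audit PROCTITLE hex dump back into its NUL-separated argv."""
        raw = bytes.fromhex(hex_title)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    print(decode_proctitle(
        "627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F"))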
Dec 13 03:45:50.104556 env[1255]: time="2024-12-13T03:45:50.104412340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574c4c684d-2dhzt,Uid:35fca77a-f366-4df2-8058-b9331e4164b4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f\""
Dec 13 03:45:50.121919 env[1255]: time="2024-12-13T03:45:50.117163530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 03:45:50.175821 systemd-networkd[1020]: vxlan.calico: Link UP
Dec 13 03:45:50.175831 systemd-networkd[1020]: vxlan.calico: Gained carrier
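Illustration only (not taken from the log): once systemd-networkd reports the vxlan.calico device gaining carrier, its state can be read back through the standard sysfs attribute; tunnel devices may report "up" or "unknown" there.

    from pathlib import Path

    # Hypothetical check run on the node itself; path follows the usual sysfs layout.
    iface = "vxlan.calico"
    print(iface, Path(f"/sys/class/net/{iface}/operstate").read_text().strip())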
Dec 13 03:45:50.180000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.180000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.180000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.180000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.180000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.180000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.180000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.180000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.180000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.180000 audit: BPF prog-id=18 op=LOAD
Dec 13 03:45:50.180000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd315eda80 a2=98 a3=100 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.180000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.180000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit: BPF prog-id=19 op=LOAD
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd315ed890 a2=74 a3=540051 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit: BPF prog-id=20 op=LOAD
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd315ed8c0 a2=94 a3=2 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd315ed790 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd315ed7c0 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd315ed6d0 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd315ed7e0 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd315ed7c0 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd315ed7b0 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd315ed7e0 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd315ed7c0 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd315ed7e0 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd315ed7b0 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd315ed820 a2=28 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.182000 audit: BPF prog-id=21 op=LOAD
Dec 13 03:45:50.182000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd315ed690 a2=40 a3=0 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.182000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.182000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd315ed680 a2=50 a3=2800 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd315ed680 a2=50 a3=2800 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.183000 audit: BPF prog-id=22 op=LOAD
Dec 13 03:45:50.183000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd315ecea0 a2=94 a3=2 items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Dec 13 03:45:50.184000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { perfmon } for  pid=3675 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit[3675]: AVC avc:  denied  { bpf } for  pid=3675 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.184000 audit: BPF prog-id=23 op=LOAD
Dec 13 03:45:50.184000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd315ecfa0 a2=94 a3=2d items=0 ppid=3401 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.184000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
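For reference, running the decode_proctitle sketch above over the pid 3675 records yields bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp, i.e. Calico loading and pinning its XDP prefilter object under /sys/fs/bpf.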
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit: BPF prog-id=24 op=LOAD
Dec 13 03:45:50.186000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea805c6b0 a2=98 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.186000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit: BPF prog-id=25 op=LOAD
Dec 13 03:45:50.186000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea805c490 a2=74 a3=540051 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.186000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.186000 audit: BPF prog-id=26 op=LOAD
Dec 13 03:45:50.186000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea805c4c0 a2=94 a3=2 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.186000 audit: BPF prog-id=26 op=UNLOAD
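Likewise, the pid 3679 PROCTITLE decodes to bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A, i.e. reading back the XDP program that was just pinned.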
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.008 [INFO][3586] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.008 [INFO][3586] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" iface="eth0" netns="/var/run/netns/cni-304f4060-eb48-694c-13a5-ced808b7a2a4"
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.009 [INFO][3586] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" iface="eth0" netns="/var/run/netns/cni-304f4060-eb48-694c-13a5-ced808b7a2a4"
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.009 [INFO][3586] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" iface="eth0" netns="/var/run/netns/cni-304f4060-eb48-694c-13a5-ced808b7a2a4"
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.009 [INFO][3586] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.009 [INFO][3586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.135 [INFO][3626] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" HandleID="k8s-pod-network.51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.135 [INFO][3626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.135 [INFO][3626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.168 [WARNING][3626] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" HandleID="k8s-pod-network.51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.168 [INFO][3626] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" HandleID="k8s-pod-network.51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.170 [INFO][3626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:50.222545 env[1255]: 2024-12-13 03:45:50.213 [INFO][3586] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:50.223416 env[1255]: time="2024-12-13T03:45:50.223201612Z" level=info msg="TearDown network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\" successfully"
Dec 13 03:45:50.223416 env[1255]: time="2024-12-13T03:45:50.223237300Z" level=info msg="StopPodSandbox for \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\" returns successfully"
Dec 13 03:45:50.224161 env[1255]: time="2024-12-13T03:45:50.224086616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574c4c684d-8pf6t,Uid:fd36f021-0ca4-4c3d-84a7-4ab0c0604448,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 03:45:50.323000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.323000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.323000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.323000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.323000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.323000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.323000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.323000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.323000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.323000 audit: BPF prog-id=27 op=LOAD
Dec 13 03:45:50.323000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea805c380 a2=40 a3=1 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.323000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.324000 audit: BPF prog-id=27 op=UNLOAD
Dec 13 03:45:50.324000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.324000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffea805c450 a2=50 a3=7ffea805c530 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.324000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea805c390 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea805c3c0 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea805c2d0 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea805c3e0 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea805c3c0 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea805c3b0 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea805c3e0 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea805c3c0 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea805c3e0 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea805c3b0 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea805c420 a2=28 a3=0 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffea805c1d0 a2=50 a3=1 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit: BPF prog-id=28 op=LOAD
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffea805c1d0 a2=94 a3=5 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.338000 audit: BPF prog-id=28 op=UNLOAD
Dec 13 03:45:50.338000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.338000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffea805c280 a2=50 a3=1 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffea805c3a0 a2=4 a3=38 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { confidentiality } for  pid=3679 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Dec 13 03:45:50.339000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffea805c3f0 a2=94 a3=6 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { confidentiality } for  pid=3679 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Dec 13 03:45:50.339000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffea805bba0 a2=94 a3=83 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { perfmon } for  pid=3679 comm="bpftool" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.339000 audit[3679]: AVC avc:  denied  { confidentiality } for  pid=3679 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Dec 13 03:45:50.339000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffea805bba0 a2=94 a3=83 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.340000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.340000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffea805d5e0 a2=10 a3=f0f1 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.340000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.340000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffea805d480 a2=10 a3=3 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.340000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.340000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffea805d420 a2=10 a3=3 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.340000 audit[3679]: AVC avc:  denied  { bpf } for  pid=3679 comm="bpftool" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Dec 13 03:45:50.340000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffea805d420 a2=10 a3=7 items=0 ppid=3401 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
Dec 13 03:45:50.346000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 03:45:50.460000 audit[3724]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3724 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:50.460000 audit[3724]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe199aefd0 a2=0 a3=7ffe199aefbc items=0 ppid=3401 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.464542 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie445b24f2ca: link becomes ready
Dec 13 03:45:50.460000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:50.462671 systemd-networkd[1020]: calie445b24f2ca: Link UP
Dec 13 03:45:50.463715 systemd-networkd[1020]: calie445b24f2ca: Gained carrier
Dec 13 03:45:50.491000 audit[3722]: NETFILTER_CFG table=raw:98 family=2 entries=21 op=nft_register_chain pid=3722 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:50.491000 audit[3722]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffde3e87ce0 a2=0 a3=7ffde3e87ccc items=0 ppid=3401 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.491000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:50.505000 audit[3723]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3723 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:50.505000 audit[3723]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffec27ea6f0 a2=0 a3=7ffec27ea6dc items=0 ppid=3401 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.505000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.331 [INFO][3683] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0 calico-apiserver-574c4c684d- calico-apiserver  fd36f021-0ca4-4c3d-84a7-4ab0c0604448 802 0 2024-12-13 03:45:16 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:574c4c684d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-3510-3-6-b-896f86a818.novalocal  calico-apiserver-574c4c684d-8pf6t eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie445b24f2ca  [] []}} ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-8pf6t" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.332 [INFO][3683] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-8pf6t" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.399 [INFO][3699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" HandleID="k8s-pod-network.3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.413 [INFO][3699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" HandleID="k8s-pod-network.3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-6-b-896f86a818.novalocal", "pod":"calico-apiserver-574c4c684d-8pf6t", "timestamp":"2024-12-13 03:45:50.399264175 +0000 UTC"}, Hostname:"ci-3510-3-6-b-896f86a818.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.413 [INFO][3699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.413 [INFO][3699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.413 [INFO][3699] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-b-896f86a818.novalocal'
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.415 [INFO][3699] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.423 [INFO][3699] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.427 [INFO][3699] ipam/ipam.go 489: Trying affinity for 192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.430 [INFO][3699] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.433 [INFO][3699] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.433 [INFO][3699] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.435 [INFO][3699] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.446 [INFO][3699] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.454 [INFO][3699] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.130/26] block=192.168.124.128/26 handle="k8s-pod-network.3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.454 [INFO][3699] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.130/26] handle="k8s-pod-network.3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.454 [INFO][3699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:50.508426 env[1255]: 2024-12-13 03:45:50.454 [INFO][3699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.130/26] IPv6=[] ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" HandleID="k8s-pod-network.3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.506000 audit[3725]: NETFILTER_CFG table=filter:100 family=2 entries=75 op=nft_register_chain pid=3725 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:50.514144 env[1255]: 2024-12-13 03:45:50.457 [INFO][3683] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-8pf6t" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0", GenerateName:"calico-apiserver-574c4c684d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd36f021-0ca4-4c3d-84a7-4ab0c0604448", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574c4c684d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"", Pod:"calico-apiserver-574c4c684d-8pf6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie445b24f2ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:50.514144 env[1255]: 2024-12-13 03:45:50.457 [INFO][3683] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.130/32] ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-8pf6t" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.514144 env[1255]: 2024-12-13 03:45:50.457 [INFO][3683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie445b24f2ca ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-8pf6t" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.514144 env[1255]: 2024-12-13 03:45:50.461 [INFO][3683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-8pf6t" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.514144 env[1255]: 2024-12-13 03:45:50.466 [INFO][3683] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-8pf6t" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0", GenerateName:"calico-apiserver-574c4c684d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd36f021-0ca4-4c3d-84a7-4ab0c0604448", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574c4c684d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02", Pod:"calico-apiserver-574c4c684d-8pf6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie445b24f2ca", MAC:"ce:d0:d4:ea:e1:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:50.514144 env[1255]: 2024-12-13 03:45:50.495 [INFO][3683] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02" Namespace="calico-apiserver" Pod="calico-apiserver-574c4c684d-8pf6t" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:50.506000 audit[3725]: SYSCALL arch=c000003e syscall=46 success=yes exit=40748 a0=3 a1=7ffc2e428020 a2=0 a3=0 items=0 ppid=3401 pid=3725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.506000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:50.514625 systemd[1]: run-netns-cni\x2d304f4060\x2deb48\x2d694c\x2d13a5\x2dced808b7a2a4.mount: Deactivated successfully.
Dec 13 03:45:50.540835 env[1255]: time="2024-12-13T03:45:50.540723337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:50.540835 env[1255]: time="2024-12-13T03:45:50.540784952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:50.540835 env[1255]: time="2024-12-13T03:45:50.540800902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:50.541564 env[1255]: time="2024-12-13T03:45:50.541472534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02 pid=3752 runtime=io.containerd.runc.v2
Dec 13 03:45:50.547000 audit[3757]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3757 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:50.547000 audit[3757]: SYSCALL arch=c000003e syscall=46 success=yes exit=20328 a0=3 a1=7ffe64d98cf0 a2=0 a3=7ffe64d98cdc items=0 ppid=3401 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:50.547000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:50.573854 systemd[1]: run-containerd-runc-k8s.io-3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02-runc.BLj9CL.mount: Deactivated successfully.
Dec 13 03:45:50.628437 env[1255]: time="2024-12-13T03:45:50.628377755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574c4c684d-8pf6t,Uid:fd36f021-0ca4-4c3d-84a7-4ab0c0604448,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02\""
Dec 13 03:45:50.837033 env[1255]: time="2024-12-13T03:45:50.836903268Z" level=info msg="StopPodSandbox for \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\""
Dec 13 03:45:50.840996 env[1255]: time="2024-12-13T03:45:50.840922390Z" level=info msg="StopPodSandbox for \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\""
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.935 [INFO][3805] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.935 [INFO][3805] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" iface="eth0" netns="/var/run/netns/cni-6970e081-80b4-e80f-0000-dc0a39c00e50"
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.936 [INFO][3805] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" iface="eth0" netns="/var/run/netns/cni-6970e081-80b4-e80f-0000-dc0a39c00e50"
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.936 [INFO][3805] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" iface="eth0" netns="/var/run/netns/cni-6970e081-80b4-e80f-0000-dc0a39c00e50"
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.936 [INFO][3805] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.936 [INFO][3805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.987 [INFO][3823] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" HandleID="k8s-pod-network.2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.987 [INFO][3823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.987 [INFO][3823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.994 [WARNING][3823] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" HandleID="k8s-pod-network.2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:50.994 [INFO][3823] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" HandleID="k8s-pod-network.2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:51.002 [INFO][3823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:51.008884 env[1255]: 2024-12-13 03:45:51.004 [INFO][3805] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:51.015925 systemd[1]: run-netns-cni\x2d6970e081\x2d80b4\x2de80f\x2d0000\x2ddc0a39c00e50.mount: Deactivated successfully.
Dec 13 03:45:51.021102 env[1255]: time="2024-12-13T03:45:51.021062223Z" level=info msg="TearDown network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\" successfully"
Dec 13 03:45:51.021302 env[1255]: time="2024-12-13T03:45:51.021257250Z" level=info msg="StopPodSandbox for \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\" returns successfully"
Dec 13 03:45:51.022948 env[1255]: time="2024-12-13T03:45:51.022620681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59bc466bbc-4dbd6,Uid:bc3d7731-763a-422a-9334-38f54b5935d1,Namespace:calico-system,Attempt:1,}"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.009 [INFO][3818] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.010 [INFO][3818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" iface="eth0" netns="/var/run/netns/cni-e1f44e4c-6ad6-194b-17d5-f89575529bef"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.010 [INFO][3818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" iface="eth0" netns="/var/run/netns/cni-e1f44e4c-6ad6-194b-17d5-f89575529bef"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.010 [INFO][3818] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" iface="eth0" netns="/var/run/netns/cni-e1f44e4c-6ad6-194b-17d5-f89575529bef"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.010 [INFO][3818] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.010 [INFO][3818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.060 [INFO][3831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" HandleID="k8s-pod-network.38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.060 [INFO][3831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.060 [INFO][3831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.074 [WARNING][3831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" HandleID="k8s-pod-network.38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.074 [INFO][3831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" HandleID="k8s-pod-network.38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.077 [INFO][3831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:51.081033 env[1255]: 2024-12-13 03:45:51.079 [INFO][3818] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:51.082012 env[1255]: time="2024-12-13T03:45:51.081943828Z" level=info msg="TearDown network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\" successfully"
Dec 13 03:45:51.082094 env[1255]: time="2024-12-13T03:45:51.082073252Z" level=info msg="StopPodSandbox for \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\" returns successfully"
Dec 13 03:45:51.082837 env[1255]: time="2024-12-13T03:45:51.082811538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v5c5r,Uid:e67c0431-ca4e-483a-b78f-aa6377b70035,Namespace:calico-system,Attempt:1,}"
Dec 13 03:45:51.256186 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 03:45:51.256703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali32f239bdae6: link becomes ready
Dec 13 03:45:51.257649 systemd-networkd[1020]: cali32f239bdae6: Link UP
Dec 13 03:45:51.257874 systemd-networkd[1020]: cali32f239bdae6: Gained carrier
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.118 [INFO][3837] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0 calico-kube-controllers-59bc466bbc- calico-system  bc3d7731-763a-422a-9334-38f54b5935d1 813 0 2024-12-13 03:45:17 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59bc466bbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  ci-3510-3-6-b-896f86a818.novalocal  calico-kube-controllers-59bc466bbc-4dbd6 eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] cali32f239bdae6  [] []}} ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Namespace="calico-system" Pod="calico-kube-controllers-59bc466bbc-4dbd6" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.119 [INFO][3837] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Namespace="calico-system" Pod="calico-kube-controllers-59bc466bbc-4dbd6" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.174 [INFO][3859] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" HandleID="k8s-pod-network.d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.198 [INFO][3859] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" HandleID="k8s-pod-network.d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-6-b-896f86a818.novalocal", "pod":"calico-kube-controllers-59bc466bbc-4dbd6", "timestamp":"2024-12-13 03:45:51.174561075 +0000 UTC"}, Hostname:"ci-3510-3-6-b-896f86a818.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.198 [INFO][3859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.198 [INFO][3859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.198 [INFO][3859] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-b-896f86a818.novalocal'
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.210 [INFO][3859] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.217 [INFO][3859] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.226 [INFO][3859] ipam/ipam.go 489: Trying affinity for 192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.228 [INFO][3859] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.231 [INFO][3859] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.231 [INFO][3859] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.232 [INFO][3859] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.237 [INFO][3859] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.245 [INFO][3859] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.131/26] block=192.168.124.128/26 handle="k8s-pod-network.d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.245 [INFO][3859] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.131/26] handle="k8s-pod-network.d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.245 [INFO][3859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:51.280666 env[1255]: 2024-12-13 03:45:51.245 [INFO][3859] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.131/26] IPv6=[] ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" HandleID="k8s-pod-network.d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.283357 env[1255]: 2024-12-13 03:45:51.247 [INFO][3837] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Namespace="calico-system" Pod="calico-kube-controllers-59bc466bbc-4dbd6" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0", GenerateName:"calico-kube-controllers-59bc466bbc-", Namespace:"calico-system", SelfLink:"", UID:"bc3d7731-763a-422a-9334-38f54b5935d1", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59bc466bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"", Pod:"calico-kube-controllers-59bc466bbc-4dbd6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32f239bdae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:51.283357 env[1255]: 2024-12-13 03:45:51.248 [INFO][3837] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.131/32] ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Namespace="calico-system" Pod="calico-kube-controllers-59bc466bbc-4dbd6" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.283357 env[1255]: 2024-12-13 03:45:51.248 [INFO][3837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32f239bdae6 ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Namespace="calico-system" Pod="calico-kube-controllers-59bc466bbc-4dbd6" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.283357 env[1255]: 2024-12-13 03:45:51.260 [INFO][3837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Namespace="calico-system" Pod="calico-kube-controllers-59bc466bbc-4dbd6" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.283357 env[1255]: 2024-12-13 03:45:51.260 [INFO][3837] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Namespace="calico-system" Pod="calico-kube-controllers-59bc466bbc-4dbd6" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0", GenerateName:"calico-kube-controllers-59bc466bbc-", Namespace:"calico-system", SelfLink:"", UID:"bc3d7731-763a-422a-9334-38f54b5935d1", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59bc466bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982", Pod:"calico-kube-controllers-59bc466bbc-4dbd6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32f239bdae6", MAC:"4a:60:e4:b7:50:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:51.283357 env[1255]: 2024-12-13 03:45:51.278 [INFO][3837] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982" Namespace="calico-system" Pod="calico-kube-controllers-59bc466bbc-4dbd6" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:51.312026 env[1255]: time="2024-12-13T03:45:51.311942773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:51.312192 env[1255]: time="2024-12-13T03:45:51.312045326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:51.312192 env[1255]: time="2024-12-13T03:45:51.312078829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:51.314493 env[1255]: time="2024-12-13T03:45:51.314402976Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982 pid=3894 runtime=io.containerd.runc.v2
Dec 13 03:45:51.315000 audit[3900]: NETFILTER_CFG table=filter:102 family=2 entries=48 op=nft_register_chain pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:51.315000 audit[3900]: SYSCALL arch=c000003e syscall=46 success=yes exit=24376 a0=3 a1=7ffd2e82b4c0 a2=0 a3=7ffd2e82b4ac items=0 ppid=3401 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:51.315000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:51.335940 systemd-networkd[1020]: calibde4ba64b7e: Link UP
Dec 13 03:45:51.337710 systemd-networkd[1020]: calibde4ba64b7e: Gained carrier
Dec 13 03:45:51.338406 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibde4ba64b7e: link becomes ready
Dec 13 03:45:51.358280 systemd-networkd[1020]: calie0c8151690b: Gained IPv6LL
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.187 [INFO][3851] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0 csi-node-driver- calico-system  e67c0431-ca4e-483a-b78f-aa6377b70035 814 0 2024-12-13 03:45:17 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  ci-3510-3-6-b-896f86a818.novalocal  csi-node-driver-v5c5r eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] calibde4ba64b7e  [] []}} ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Namespace="calico-system" Pod="csi-node-driver-v5c5r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.187 [INFO][3851] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Namespace="calico-system" Pod="csi-node-driver-v5c5r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.265 [INFO][3869] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" HandleID="k8s-pod-network.009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.286 [INFO][3869] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" HandleID="k8s-pod-network.009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003107d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-6-b-896f86a818.novalocal", "pod":"csi-node-driver-v5c5r", "timestamp":"2024-12-13 03:45:51.265882667 +0000 UTC"}, Hostname:"ci-3510-3-6-b-896f86a818.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.286 [INFO][3869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.286 [INFO][3869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.286 [INFO][3869] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-b-896f86a818.novalocal'
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.288 [INFO][3869] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.293 [INFO][3869] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.301 [INFO][3869] ipam/ipam.go 489: Trying affinity for 192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.303 [INFO][3869] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.306 [INFO][3869] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.306 [INFO][3869] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.308 [INFO][3869] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.323 [INFO][3869] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.330 [INFO][3869] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.132/26] block=192.168.124.128/26 handle="k8s-pod-network.009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.330 [INFO][3869] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.132/26] handle="k8s-pod-network.009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.330 [INFO][3869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:51.377385 env[1255]: 2024-12-13 03:45:51.330 [INFO][3869] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.132/26] IPv6=[] ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" HandleID="k8s-pod-network.009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.378103 env[1255]: 2024-12-13 03:45:51.332 [INFO][3851] cni-plugin/k8s.go 386: Populated endpoint ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Namespace="calico-system" Pod="csi-node-driver-v5c5r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e67c0431-ca4e-483a-b78f-aa6377b70035", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"", Pod:"csi-node-driver-v5c5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibde4ba64b7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:51.378103 env[1255]: 2024-12-13 03:45:51.332 [INFO][3851] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.132/32] ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Namespace="calico-system" Pod="csi-node-driver-v5c5r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.378103 env[1255]: 2024-12-13 03:45:51.332 [INFO][3851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibde4ba64b7e ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Namespace="calico-system" Pod="csi-node-driver-v5c5r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.378103 env[1255]: 2024-12-13 03:45:51.338 [INFO][3851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Namespace="calico-system" Pod="csi-node-driver-v5c5r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.378103 env[1255]: 2024-12-13 03:45:51.338 [INFO][3851] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Namespace="calico-system" Pod="csi-node-driver-v5c5r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e67c0431-ca4e-483a-b78f-aa6377b70035", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6", Pod:"csi-node-driver-v5c5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibde4ba64b7e", MAC:"ba:f9:63:52:8a:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:51.378103 env[1255]: 2024-12-13 03:45:51.375 [INFO][3851] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6" Namespace="calico-system" Pod="csi-node-driver-v5c5r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:51.379000 audit[3929]: NETFILTER_CFG table=filter:103 family=2 entries=38 op=nft_register_chain pid=3929 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:51.379000 audit[3929]: SYSCALL arch=c000003e syscall=46 success=yes exit=19812 a0=3 a1=7ffdaf63b2a0 a2=0 a3=7ffdaf63b28c items=0 ppid=3401 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:51.379000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:51.411137 env[1255]: time="2024-12-13T03:45:51.411073317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:51.412715 env[1255]: time="2024-12-13T03:45:51.411168607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:51.412715 env[1255]: time="2024-12-13T03:45:51.411203152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:51.413599 env[1255]: time="2024-12-13T03:45:51.413459341Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6 pid=3946 runtime=io.containerd.runc.v2
Dec 13 03:45:51.432320 env[1255]: time="2024-12-13T03:45:51.432251791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59bc466bbc-4dbd6,Uid:bc3d7731-763a-422a-9334-38f54b5935d1,Namespace:calico-system,Attempt:1,} returns sandbox id \"d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982\""
Dec 13 03:45:51.466113 env[1255]: time="2024-12-13T03:45:51.466076047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v5c5r,Uid:e67c0431-ca4e-483a-b78f-aa6377b70035,Namespace:calico-system,Attempt:1,} returns sandbox id \"009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6\""
Dec 13 03:45:51.520073 systemd[1]: run-netns-cni\x2de1f44e4c\x2d6ad6\x2d194b\x2d17d5\x2df89575529bef.mount: Deactivated successfully.
Dec 13 03:45:51.537369 systemd-networkd[1020]: vxlan.calico: Gained IPv6LL
Dec 13 03:45:51.839875 env[1255]: time="2024-12-13T03:45:51.839700141Z" level=info msg="StopPodSandbox for \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\""
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.923 [INFO][4002] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.923 [INFO][4002] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" iface="eth0" netns="/var/run/netns/cni-4b741638-81e4-b25f-9d72-c0d1416f316b"
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.924 [INFO][4002] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" iface="eth0" netns="/var/run/netns/cni-4b741638-81e4-b25f-9d72-c0d1416f316b"
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.925 [INFO][4002] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" iface="eth0" netns="/var/run/netns/cni-4b741638-81e4-b25f-9d72-c0d1416f316b"
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.925 [INFO][4002] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.925 [INFO][4002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.956 [INFO][4009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" HandleID="k8s-pod-network.7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.956 [INFO][4009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.956 [INFO][4009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.970 [WARNING][4009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" HandleID="k8s-pod-network.7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.971 [INFO][4009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" HandleID="k8s-pod-network.7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.973 [INFO][4009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:51.976033 env[1255]: 2024-12-13 03:45:51.974 [INFO][4002] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:51.980089 env[1255]: time="2024-12-13T03:45:51.979455188Z" level=info msg="TearDown network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\" successfully"
Dec 13 03:45:51.980089 env[1255]: time="2024-12-13T03:45:51.979501325Z" level=info msg="StopPodSandbox for \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\" returns successfully"
Dec 13 03:45:51.979799 systemd[1]: run-netns-cni\x2d4b741638\x2d81e4\x2db25f\x2d9d72\x2dc0d1416f316b.mount: Deactivated successfully.
Dec 13 03:45:51.981280 env[1255]: time="2024-12-13T03:45:51.981254580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xpw7r,Uid:c5bfdb85-3d00-442f-80af-ff6cf6916b5c,Namespace:kube-system,Attempt:1,}"
Dec 13 03:45:51.984814 systemd-networkd[1020]: calie445b24f2ca: Gained IPv6LL
Dec 13 03:45:52.152555 systemd-networkd[1020]: calia73719774c9: Link UP
Dec 13 03:45:52.153579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia73719774c9: link becomes ready
Dec 13 03:45:52.155164 systemd-networkd[1020]: calia73719774c9: Gained carrier
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.053 [INFO][4015] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0 coredns-76f75df574- kube-system  c5bfdb85-3d00-442f-80af-ff6cf6916b5c 824 0 2024-12-13 03:45:07 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-3510-3-6-b-896f86a818.novalocal  coredns-76f75df574-xpw7r eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] calia73719774c9  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Namespace="kube-system" Pod="coredns-76f75df574-xpw7r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.053 [INFO][4015] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Namespace="kube-system" Pod="coredns-76f75df574-xpw7r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.099 [INFO][4027] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" HandleID="k8s-pod-network.e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.111 [INFO][4027] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" HandleID="k8s-pod-network.e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029f6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-6-b-896f86a818.novalocal", "pod":"coredns-76f75df574-xpw7r", "timestamp":"2024-12-13 03:45:52.099079593 +0000 UTC"}, Hostname:"ci-3510-3-6-b-896f86a818.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.111 [INFO][4027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.111 [INFO][4027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.111 [INFO][4027] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-b-896f86a818.novalocal'
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.113 [INFO][4027] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.117 [INFO][4027] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.121 [INFO][4027] ipam/ipam.go 489: Trying affinity for 192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.123 [INFO][4027] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.126 [INFO][4027] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.126 [INFO][4027] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.127 [INFO][4027] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.135 [INFO][4027] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.144 [INFO][4027] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.133/26] block=192.168.124.128/26 handle="k8s-pod-network.e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.144 [INFO][4027] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.133/26] handle="k8s-pod-network.e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.144 [INFO][4027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:52.180437 env[1255]: 2024-12-13 03:45:52.144 [INFO][4027] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.133/26] IPv6=[] ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" HandleID="k8s-pod-network.e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:52.181843 env[1255]: 2024-12-13 03:45:52.147 [INFO][4015] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Namespace="kube-system" Pod="coredns-76f75df574-xpw7r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c5bfdb85-3d00-442f-80af-ff6cf6916b5c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"", Pod:"coredns-76f75df574-xpw7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia73719774c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:52.181843 env[1255]: 2024-12-13 03:45:52.147 [INFO][4015] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.133/32] ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Namespace="kube-system" Pod="coredns-76f75df574-xpw7r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:52.181843 env[1255]: 2024-12-13 03:45:52.147 [INFO][4015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia73719774c9 ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Namespace="kube-system" Pod="coredns-76f75df574-xpw7r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:52.181843 env[1255]: 2024-12-13 03:45:52.162 [INFO][4015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Namespace="kube-system" Pod="coredns-76f75df574-xpw7r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:52.181843 env[1255]: 2024-12-13 03:45:52.162 [INFO][4015] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Namespace="kube-system" Pod="coredns-76f75df574-xpw7r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c5bfdb85-3d00-442f-80af-ff6cf6916b5c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722", Pod:"coredns-76f75df574-xpw7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia73719774c9", MAC:"d6:70:3f:1c:23:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:52.181843 env[1255]: 2024-12-13 03:45:52.177 [INFO][4015] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722" Namespace="kube-system" Pod="coredns-76f75df574-xpw7r" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:52.197000 audit[4043]: NETFILTER_CFG table=filter:104 family=2 entries=46 op=nft_register_chain pid=4043 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:52.197000 audit[4043]: SYSCALL arch=c000003e syscall=46 success=yes exit=22696 a0=3 a1=7fffa946dc10 a2=0 a3=7fffa946dbfc items=0 ppid=3401 pid=4043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:52.197000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:52.212030 env[1255]: time="2024-12-13T03:45:52.211968432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:52.212152 env[1255]: time="2024-12-13T03:45:52.212038353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:52.212152 env[1255]: time="2024-12-13T03:45:52.212054653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:52.212227 env[1255]: time="2024-12-13T03:45:52.212174839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722 pid=4056 runtime=io.containerd.runc.v2
Dec 13 03:45:52.294955 env[1255]: time="2024-12-13T03:45:52.294905953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xpw7r,Uid:c5bfdb85-3d00-442f-80af-ff6cf6916b5c,Namespace:kube-system,Attempt:1,} returns sandbox id \"e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722\""
Dec 13 03:45:52.299551 env[1255]: time="2024-12-13T03:45:52.299500876Z" level=info msg="CreateContainer within sandbox \"e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 03:45:52.381966 systemd-networkd[1020]: cali32f239bdae6: Gained IPv6LL
Dec 13 03:45:52.433512 systemd-networkd[1020]: calibde4ba64b7e: Gained IPv6LL
Dec 13 03:45:52.682771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744764108.mount: Deactivated successfully.
Dec 13 03:45:52.697044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562007055.mount: Deactivated successfully.
Dec 13 03:45:52.837207 env[1255]: time="2024-12-13T03:45:52.837117659Z" level=info msg="StopPodSandbox for \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\""
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.365 [INFO][4106] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.365 [INFO][4106] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" iface="eth0" netns="/var/run/netns/cni-259b9652-4b46-39e7-21f7-7efcccf731c9"
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.365 [INFO][4106] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" iface="eth0" netns="/var/run/netns/cni-259b9652-4b46-39e7-21f7-7efcccf731c9"
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.366 [INFO][4106] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" iface="eth0" netns="/var/run/netns/cni-259b9652-4b46-39e7-21f7-7efcccf731c9"
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.366 [INFO][4106] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.366 [INFO][4106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.426 [INFO][4113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" HandleID="k8s-pod-network.783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.427 [INFO][4113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.427 [INFO][4113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.443 [WARNING][4113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" HandleID="k8s-pod-network.783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.443 [INFO][4113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" HandleID="k8s-pod-network.783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.447 [INFO][4113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:53.452951 env[1255]: 2024-12-13 03:45:53.450 [INFO][4106] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:45:53.455277 env[1255]: time="2024-12-13T03:45:53.455183153Z" level=info msg="TearDown network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\" successfully"
Dec 13 03:45:53.455572 env[1255]: time="2024-12-13T03:45:53.455490300Z" level=info msg="StopPodSandbox for \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\" returns successfully"
Dec 13 03:45:53.457629 env[1255]: time="2024-12-13T03:45:53.457571830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hv9jf,Uid:2f24277a-80c8-4080-b170-650a6deabb6f,Namespace:kube-system,Attempt:1,}"
Dec 13 03:45:53.518440 systemd[1]: run-netns-cni\x2d259b9652\x2d4b46\x2d39e7\x2d21f7\x2d7efcccf731c9.mount: Deactivated successfully.
Dec 13 03:45:53.585245 systemd-networkd[1020]: calia73719774c9: Gained IPv6LL
Dec 13 03:45:53.752759 env[1255]: time="2024-12-13T03:45:53.752432985Z" level=info msg="CreateContainer within sandbox \"e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e50b82c84cd11f127d261b496a9dc245c832c892004c03918759025294fbd51\""
Dec 13 03:45:53.756007 env[1255]: time="2024-12-13T03:45:53.755940926Z" level=info msg="StartContainer for \"0e50b82c84cd11f127d261b496a9dc245c832c892004c03918759025294fbd51\""
Dec 13 03:45:53.883914 env[1255]: time="2024-12-13T03:45:53.883879982Z" level=info msg="StartContainer for \"0e50b82c84cd11f127d261b496a9dc245c832c892004c03918759025294fbd51\" returns successfully"
Dec 13 03:45:54.046309 systemd-networkd[1020]: cali8ef70ab4150: Link UP
Dec 13 03:45:54.049770 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 03:45:54.049884 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8ef70ab4150: link becomes ready
Dec 13 03:45:54.050217 systemd-networkd[1020]: cali8ef70ab4150: Gained carrier
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:53.944 [INFO][4151] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0 coredns-76f75df574- kube-system  2f24277a-80c8-4080-b170-650a6deabb6f 835 0 2024-12-13 03:45:07 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-3510-3-6-b-896f86a818.novalocal  coredns-76f75df574-hv9jf eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali8ef70ab4150  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Namespace="kube-system" Pod="coredns-76f75df574-hv9jf" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:53.944 [INFO][4151] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Namespace="kube-system" Pod="coredns-76f75df574-hv9jf" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:53.987 [INFO][4161] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" HandleID="k8s-pod-network.d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:53.998 [INFO][4161] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" HandleID="k8s-pod-network.d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aac80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-6-b-896f86a818.novalocal", "pod":"coredns-76f75df574-hv9jf", "timestamp":"2024-12-13 03:45:53.987291424 +0000 UTC"}, Hostname:"ci-3510-3-6-b-896f86a818.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:53.998 [INFO][4161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:53.998 [INFO][4161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:53.998 [INFO][4161] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-b-896f86a818.novalocal'
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.000 [INFO][4161] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.005 [INFO][4161] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.011 [INFO][4161] ipam/ipam.go 489: Trying affinity for 192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.013 [INFO][4161] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.016 [INFO][4161] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.016 [INFO][4161] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.018 [INFO][4161] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.024 [INFO][4161] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.035 [INFO][4161] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.134/26] block=192.168.124.128/26 handle="k8s-pod-network.d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.035 [INFO][4161] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.134/26] handle="k8s-pod-network.d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" host="ci-3510-3-6-b-896f86a818.novalocal"
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.035 [INFO][4161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:54.068777 env[1255]: 2024-12-13 03:45:54.035 [INFO][4161] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.134/26] IPv6=[] ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" HandleID="k8s-pod-network.d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:54.069767 env[1255]: 2024-12-13 03:45:54.039 [INFO][4151] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Namespace="kube-system" Pod="coredns-76f75df574-hv9jf" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2f24277a-80c8-4080-b170-650a6deabb6f", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"", Pod:"coredns-76f75df574-hv9jf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ef70ab4150", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:54.069767 env[1255]: 2024-12-13 03:45:54.040 [INFO][4151] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.134/32] ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Namespace="kube-system" Pod="coredns-76f75df574-hv9jf" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:54.069767 env[1255]: 2024-12-13 03:45:54.040 [INFO][4151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ef70ab4150 ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Namespace="kube-system" Pod="coredns-76f75df574-hv9jf" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:54.069767 env[1255]: 2024-12-13 03:45:54.050 [INFO][4151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Namespace="kube-system" Pod="coredns-76f75df574-hv9jf" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:54.069767 env[1255]: 2024-12-13 03:45:54.055 [INFO][4151] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Namespace="kube-system" Pod="coredns-76f75df574-hv9jf" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2f24277a-80c8-4080-b170-650a6deabb6f", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9", Pod:"coredns-76f75df574-hv9jf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ef70ab4150", MAC:"02:bd:bf:b3:06:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:54.069767 env[1255]: 2024-12-13 03:45:54.066 [INFO][4151] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9" Namespace="kube-system" Pod="coredns-76f75df574-hv9jf" WorkloadEndpoint="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:45:54.095000 audit[4184]: NETFILTER_CFG table=filter:105 family=2 entries=42 op=nft_register_chain pid=4184 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:54.098019 kernel: kauditd_printk_skb: 517 callbacks suppressed
Dec 13 03:45:54.099080 kernel: audit: type=1325 audit(1734061554.095:415): table=filter:105 family=2 entries=42 op=nft_register_chain pid=4184 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Dec 13 03:45:54.095000 audit[4184]: SYSCALL arch=c000003e syscall=46 success=yes exit=20580 a0=3 a1=7ffc233ea080 a2=0 a3=7ffc233ea06c items=0 ppid=3401 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:54.105661 kernel: audit: type=1300 audit(1734061554.095:415): arch=c000003e syscall=46 success=yes exit=20580 a0=3 a1=7ffc233ea080 a2=0 a3=7ffc233ea06c items=0 ppid=3401 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:54.095000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:54.108901 kernel: audit: type=1327 audit(1734061554.095:415): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Dec 13 03:45:54.118941 env[1255]: time="2024-12-13T03:45:54.118890128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:45:54.119100 env[1255]: time="2024-12-13T03:45:54.119075245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:45:54.119195 env[1255]: time="2024-12-13T03:45:54.119172298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:45:54.120780 env[1255]: time="2024-12-13T03:45:54.120723622Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9 pid=4192 runtime=io.containerd.runc.v2
Dec 13 03:45:54.202423 env[1255]: time="2024-12-13T03:45:54.202376267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hv9jf,Uid:2f24277a-80c8-4080-b170-650a6deabb6f,Namespace:kube-system,Attempt:1,} returns sandbox id \"d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9\""
Dec 13 03:45:54.207129 env[1255]: time="2024-12-13T03:45:54.207101546Z" level=info msg="CreateContainer within sandbox \"d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 03:45:54.242021 env[1255]: time="2024-12-13T03:45:54.241617353Z" level=info msg="CreateContainer within sandbox \"d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aab3f60c7957e4299d1f37a5018ce9b893362569596d8941bb1468cd2ebceb10\""
Dec 13 03:45:54.242282 env[1255]: time="2024-12-13T03:45:54.242253197Z" level=info msg="StartContainer for \"aab3f60c7957e4299d1f37a5018ce9b893362569596d8941bb1468cd2ebceb10\""
Dec 13 03:45:54.365285 env[1255]: time="2024-12-13T03:45:54.364775407Z" level=info msg="StartContainer for \"aab3f60c7957e4299d1f37a5018ce9b893362569596d8941bb1468cd2ebceb10\" returns successfully"
Dec 13 03:45:54.476000 audit[4266]: NETFILTER_CFG table=filter:106 family=2 entries=16 op=nft_register_rule pid=4266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:54.476000 audit[4266]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffda506f6c0 a2=0 a3=7ffda506f6ac items=0 ppid=2382 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:54.485527 kernel: audit: type=1325 audit(1734061554.476:416): table=filter:106 family=2 entries=16 op=nft_register_rule pid=4266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:54.485577 kernel: audit: type=1300 audit(1734061554.476:416): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffda506f6c0 a2=0 a3=7ffda506f6ac items=0 ppid=2382 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:54.485602 kernel: audit: type=1327 audit(1734061554.476:416): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:54.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:54.488000 audit[4266]: NETFILTER_CFG table=nat:107 family=2 entries=14 op=nft_register_rule pid=4266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:54.493367 kernel: audit: type=1325 audit(1734061554.488:417): table=nat:107 family=2 entries=14 op=nft_register_rule pid=4266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:54.488000 audit[4266]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffda506f6c0 a2=0 a3=0 items=0 ppid=2382 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:54.501859 kernel: audit: type=1300 audit(1734061554.488:417): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffda506f6c0 a2=0 a3=0 items=0 ppid=2382 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:54.501905 kernel: audit: type=1327 audit(1734061554.488:417): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:54.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:55.281727 kubelet[2205]: I1213 03:45:55.281648    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xpw7r" podStartSLOduration=48.281606522 podStartE2EDuration="48.281606522s" podCreationTimestamp="2024-12-13 03:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:45:54.264613076 +0000 UTC m=+57.043213118" watchObservedRunningTime="2024-12-13 03:45:55.281606522 +0000 UTC m=+58.060206554"
Dec 13 03:45:55.392718 kubelet[2205]: I1213 03:45:55.392662    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hv9jf" podStartSLOduration=48.392623248 podStartE2EDuration="48.392623248s" podCreationTimestamp="2024-12-13 03:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:45:55.287783477 +0000 UTC m=+58.066383519" watchObservedRunningTime="2024-12-13 03:45:55.392623248 +0000 UTC m=+58.171223280"
Dec 13 03:45:55.487000 audit[4268]: NETFILTER_CFG table=filter:108 family=2 entries=16 op=nft_register_rule pid=4268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:55.492362 kernel: audit: type=1325 audit(1734061555.487:418): table=filter:108 family=2 entries=16 op=nft_register_rule pid=4268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:55.487000 audit[4268]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe9441e2b0 a2=0 a3=7ffe9441e29c items=0 ppid=2382 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:55.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:55.493000 audit[4268]: NETFILTER_CFG table=nat:109 family=2 entries=14 op=nft_register_rule pid=4268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:55.493000 audit[4268]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe9441e2b0 a2=0 a3=0 items=0 ppid=2382 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:55.493000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:55.532000 audit[4271]: NETFILTER_CFG table=filter:110 family=2 entries=13 op=nft_register_rule pid=4271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:55.532000 audit[4271]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff7ac05ed0 a2=0 a3=7fff7ac05ebc items=0 ppid=2382 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:55.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:55.569000 audit[4271]: NETFILTER_CFG table=nat:111 family=2 entries=47 op=nft_register_chain pid=4271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:55.569000 audit[4271]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff7ac05ed0 a2=0 a3=7fff7ac05ebc items=0 ppid=2382 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:55.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:55.750240 env[1255]: time="2024-12-13T03:45:55.750073025Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:55.755799 env[1255]: time="2024-12-13T03:45:55.755722508Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:55.759409 env[1255]: time="2024-12-13T03:45:55.759295040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:55.761775 systemd-networkd[1020]: cali8ef70ab4150: Gained IPv6LL
Dec 13 03:45:55.773107 env[1255]: time="2024-12-13T03:45:55.773046725Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:55.774992 env[1255]: time="2024-12-13T03:45:55.774931957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 03:45:55.791592 env[1255]: time="2024-12-13T03:45:55.791524529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 03:45:55.801851 env[1255]: time="2024-12-13T03:45:55.801667184Z" level=info msg="CreateContainer within sandbox \"9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 03:45:55.837701 env[1255]: time="2024-12-13T03:45:55.837575093Z" level=info msg="CreateContainer within sandbox \"9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"89c542faec98ce179d360c14b7f0201e8294317432c67ec021ffd0b8218b8118\""
Dec 13 03:45:55.841362 env[1255]: time="2024-12-13T03:45:55.841261008Z" level=info msg="StartContainer for \"89c542faec98ce179d360c14b7f0201e8294317432c67ec021ffd0b8218b8118\""
Dec 13 03:45:56.071193 env[1255]: time="2024-12-13T03:45:56.070945038Z" level=info msg="StartContainer for \"89c542faec98ce179d360c14b7f0201e8294317432c67ec021ffd0b8218b8118\" returns successfully"
Dec 13 03:45:56.352560 kubelet[2205]: I1213 03:45:56.352293    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-574c4c684d-2dhzt" podStartSLOduration=34.689446297 podStartE2EDuration="40.352048054s" podCreationTimestamp="2024-12-13 03:45:16 +0000 UTC" firstStartedPulling="2024-12-13 03:45:50.115420185 +0000 UTC m=+52.894020217" lastFinishedPulling="2024-12-13 03:45:55.778021892 +0000 UTC m=+58.556621974" observedRunningTime="2024-12-13 03:45:56.351429552 +0000 UTC m=+59.130029594" watchObservedRunningTime="2024-12-13 03:45:56.352048054 +0000 UTC m=+59.130648177"
Dec 13 03:45:56.420000 audit[4318]: NETFILTER_CFG table=filter:112 family=2 entries=10 op=nft_register_rule pid=4318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:56.420000 audit[4318]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc57422b80 a2=0 a3=7ffc57422b6c items=0 ppid=2382 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:56.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:56.425000 audit[4318]: NETFILTER_CFG table=nat:113 family=2 entries=20 op=nft_register_rule pid=4318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:56.425000 audit[4318]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc57422b80 a2=0 a3=7ffc57422b6c items=0 ppid=2382 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:56.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:56.438564 env[1255]: time="2024-12-13T03:45:56.438508501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:56.441492 env[1255]: time="2024-12-13T03:45:56.441434187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:56.443398 env[1255]: time="2024-12-13T03:45:56.443366107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:56.445833 env[1255]: time="2024-12-13T03:45:56.445797053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:45:56.446569 env[1255]: time="2024-12-13T03:45:56.446512728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 03:45:56.451138 env[1255]: time="2024-12-13T03:45:56.451072143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Dec 13 03:45:56.454553 env[1255]: time="2024-12-13T03:45:56.454513398Z" level=info msg="CreateContainer within sandbox \"3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 03:45:56.475464 env[1255]: time="2024-12-13T03:45:56.475419413Z" level=info msg="CreateContainer within sandbox \"3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83182e4ccae879afc6105481a87a68a1901396112d89f81c81f14de41365215a\""
Dec 13 03:45:56.477445 env[1255]: time="2024-12-13T03:45:56.477413609Z" level=info msg="StartContainer for \"83182e4ccae879afc6105481a87a68a1901396112d89f81c81f14de41365215a\""
Dec 13 03:45:56.563973 env[1255]: time="2024-12-13T03:45:56.563920894Z" level=info msg="StartContainer for \"83182e4ccae879afc6105481a87a68a1901396112d89f81c81f14de41365215a\" returns successfully"
Dec 13 03:45:57.282800 kubelet[2205]: I1213 03:45:57.282770    2205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 03:45:57.447000 audit[4355]: NETFILTER_CFG table=filter:114 family=2 entries=10 op=nft_register_rule pid=4355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:57.447000 audit[4355]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd08381050 a2=0 a3=7ffd0838103c items=0 ppid=2382 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:57.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:57.454000 audit[4355]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=4355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:45:57.454000 audit[4355]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd08381050 a2=0 a3=7ffd0838103c items=0 ppid=2382 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:45:57.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:45:57.589789 env[1255]: time="2024-12-13T03:45:57.589612750Z" level=info msg="StopPodSandbox for \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\""
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.740 [WARNING][4370] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0", GenerateName:"calico-kube-controllers-59bc466bbc-", Namespace:"calico-system", SelfLink:"", UID:"bc3d7731-763a-422a-9334-38f54b5935d1", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59bc466bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982", Pod:"calico-kube-controllers-59bc466bbc-4dbd6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32f239bdae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.743 [INFO][4370] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.743 [INFO][4370] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" iface="eth0" netns=""
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.743 [INFO][4370] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.743 [INFO][4370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.794 [INFO][4376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" HandleID="k8s-pod-network.2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.795 [INFO][4376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.796 [INFO][4376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.811 [WARNING][4376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" HandleID="k8s-pod-network.2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.811 [INFO][4376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" HandleID="k8s-pod-network.2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.813 [INFO][4376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:57.816617 env[1255]: 2024-12-13 03:45:57.814 [INFO][4370] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:57.817205 env[1255]: time="2024-12-13T03:45:57.817172784Z" level=info msg="TearDown network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\" successfully"
Dec 13 03:45:57.817281 env[1255]: time="2024-12-13T03:45:57.817261811Z" level=info msg="StopPodSandbox for \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\" returns successfully"
Dec 13 03:45:57.821395 env[1255]: time="2024-12-13T03:45:57.821367424Z" level=info msg="RemovePodSandbox for \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\""
Dec 13 03:45:57.821617 env[1255]: time="2024-12-13T03:45:57.821565276Z" level=info msg="Forcibly stopping sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\""
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:57.889 [WARNING][4396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0", GenerateName:"calico-kube-controllers-59bc466bbc-", Namespace:"calico-system", SelfLink:"", UID:"bc3d7731-763a-422a-9334-38f54b5935d1", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59bc466bbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982", Pod:"calico-kube-controllers-59bc466bbc-4dbd6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32f239bdae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:57.890 [INFO][4396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:57.890 [INFO][4396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" iface="eth0" netns=""
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:57.890 [INFO][4396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:57.890 [INFO][4396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:57.994 [INFO][4404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" HandleID="k8s-pod-network.2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:57.994 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:57.995 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:58.022 [WARNING][4404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" HandleID="k8s-pod-network.2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:58.023 [INFO][4404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" HandleID="k8s-pod-network.2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--kube--controllers--59bc466bbc--4dbd6-eth0"
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:58.045 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:58.084853 env[1255]: 2024-12-13 03:45:58.067 [INFO][4396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf"
Dec 13 03:45:58.089641 env[1255]: time="2024-12-13T03:45:58.085304438Z" level=info msg="TearDown network for sandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\" successfully"
Dec 13 03:45:58.113218 env[1255]: time="2024-12-13T03:45:58.113075055Z" level=info msg="RemovePodSandbox \"2c032f1d767c13e6524c5a59fab97b5e18bcf8e799e1e84deae2fca0eced5abf\" returns successfully"
Dec 13 03:45:58.127921 env[1255]: time="2024-12-13T03:45:58.127887410Z" level=info msg="StopPodSandbox for \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\""
Dec 13 03:45:58.279630 kubelet[2205]: I1213 03:45:58.279023    2205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.248 [WARNING][4424] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0", GenerateName:"calico-apiserver-574c4c684d-", Namespace:"calico-apiserver", SelfLink:"", UID:"35fca77a-f366-4df2-8058-b9331e4164b4", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574c4c684d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f", Pod:"calico-apiserver-574c4c684d-2dhzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0c8151690b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.248 [INFO][4424] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.248 [INFO][4424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" iface="eth0" netns=""
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.248 [INFO][4424] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.248 [INFO][4424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.280 [INFO][4430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" HandleID="k8s-pod-network.9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.280 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.280 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.314 [WARNING][4430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" HandleID="k8s-pod-network.9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.314 [INFO][4430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" HandleID="k8s-pod-network.9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.319 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:58.327433 env[1255]: 2024-12-13 03:45:58.321 [INFO][4424] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:58.327986 env[1255]: time="2024-12-13T03:45:58.327951589Z" level=info msg="TearDown network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\" successfully"
Dec 13 03:45:58.328065 env[1255]: time="2024-12-13T03:45:58.328045315Z" level=info msg="StopPodSandbox for \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\" returns successfully"
Dec 13 03:45:58.328522 env[1255]: time="2024-12-13T03:45:58.328501572Z" level=info msg="RemovePodSandbox for \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\""
Dec 13 03:45:58.328639 env[1255]: time="2024-12-13T03:45:58.328599886Z" level=info msg="Forcibly stopping sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\""
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.485 [WARNING][4449] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0", GenerateName:"calico-apiserver-574c4c684d-", Namespace:"calico-apiserver", SelfLink:"", UID:"35fca77a-f366-4df2-8058-b9331e4164b4", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574c4c684d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"9575a1fc9d65bf784544036c9b5d7c5db7ac0d1cd85397d861e579266218177f", Pod:"calico-apiserver-574c4c684d-2dhzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0c8151690b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.492 [INFO][4449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.492 [INFO][4449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" iface="eth0" netns=""
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.492 [INFO][4449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.492 [INFO][4449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.546 [INFO][4455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" HandleID="k8s-pod-network.9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.546 [INFO][4455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.546 [INFO][4455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.553 [WARNING][4455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" HandleID="k8s-pod-network.9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.553 [INFO][4455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" HandleID="k8s-pod-network.9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--2dhzt-eth0"
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.555 [INFO][4455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:58.558997 env[1255]: 2024-12-13 03:45:58.557 [INFO][4449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762"
Dec 13 03:45:58.560090 env[1255]: time="2024-12-13T03:45:58.559378382Z" level=info msg="TearDown network for sandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\" successfully"
Dec 13 03:45:58.564244 env[1255]: time="2024-12-13T03:45:58.564217842Z" level=info msg="RemovePodSandbox \"9bfd7ec2a9d54efa598a3dd21c9c943e584f842791cfc9a506a651f22c108762\" returns successfully"
Dec 13 03:45:58.564755 env[1255]: time="2024-12-13T03:45:58.564733802Z" level=info msg="StopPodSandbox for \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\""
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.630 [WARNING][4476] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c5bfdb85-3d00-442f-80af-ff6cf6916b5c", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722", Pod:"coredns-76f75df574-xpw7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia73719774c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.630 [INFO][4476] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.630 [INFO][4476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" iface="eth0" netns=""
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.630 [INFO][4476] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.630 [INFO][4476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.665 [INFO][4483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" HandleID="k8s-pod-network.7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.665 [INFO][4483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.665 [INFO][4483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.672 [WARNING][4483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" HandleID="k8s-pod-network.7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.672 [INFO][4483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" HandleID="k8s-pod-network.7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.674 [INFO][4483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:58.677266 env[1255]: 2024-12-13 03:45:58.675 [INFO][4476] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:58.760227 env[1255]: time="2024-12-13T03:45:58.678015510Z" level=info msg="TearDown network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\" successfully"
Dec 13 03:45:58.760227 env[1255]: time="2024-12-13T03:45:58.678048132Z" level=info msg="StopPodSandbox for \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\" returns successfully"
Dec 13 03:45:58.760227 env[1255]: time="2024-12-13T03:45:58.678482438Z" level=info msg="RemovePodSandbox for \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\""
Dec 13 03:45:58.760227 env[1255]: time="2024-12-13T03:45:58.678509459Z" level=info msg="Forcibly stopping sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\""
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.892 [WARNING][4504] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c5bfdb85-3d00-442f-80af-ff6cf6916b5c", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"e72c0f629374b96ab60d4f559dbac8143efca17ad5c59c27aed952efb4c15722", Pod:"coredns-76f75df574-xpw7r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia73719774c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.892 [INFO][4504] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.892 [INFO][4504] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" iface="eth0" netns=""
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.892 [INFO][4504] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.892 [INFO][4504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.916 [INFO][4510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" HandleID="k8s-pod-network.7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.916 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.916 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.925 [WARNING][4510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" HandleID="k8s-pod-network.7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.925 [INFO][4510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" HandleID="k8s-pod-network.7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--xpw7r-eth0"
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.927 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:58.931039 env[1255]: 2024-12-13 03:45:58.929 [INFO][4504] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b"
Dec 13 03:45:58.932017 env[1255]: time="2024-12-13T03:45:58.931982635Z" level=info msg="TearDown network for sandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\" successfully"
Dec 13 03:45:58.940006 env[1255]: time="2024-12-13T03:45:58.939968256Z" level=info msg="RemovePodSandbox \"7f1a06fa417b8d96b5562e442e90168a219a91b1a1f0dcfe35dcb7d3cb5ddf2b\" returns successfully"
Dec 13 03:45:58.940636 env[1255]: time="2024-12-13T03:45:58.940613819Z" level=info msg="StopPodSandbox for \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\""
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.004 [WARNING][4529] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0", GenerateName:"calico-apiserver-574c4c684d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd36f021-0ca4-4c3d-84a7-4ab0c0604448", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574c4c684d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02", Pod:"calico-apiserver-574c4c684d-8pf6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie445b24f2ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.004 [INFO][4529] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.004 [INFO][4529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" iface="eth0" netns=""
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.004 [INFO][4529] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.004 [INFO][4529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.048 [INFO][4535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" HandleID="k8s-pod-network.51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.048 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.048 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.072 [WARNING][4535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" HandleID="k8s-pod-network.51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.072 [INFO][4535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" HandleID="k8s-pod-network.51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.074 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:59.078374 env[1255]: 2024-12-13 03:45:59.076 [INFO][4529] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:59.078969 env[1255]: time="2024-12-13T03:45:59.078936452Z" level=info msg="TearDown network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\" successfully"
Dec 13 03:45:59.079083 env[1255]: time="2024-12-13T03:45:59.079061226Z" level=info msg="StopPodSandbox for \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\" returns successfully"
Dec 13 03:45:59.079591 env[1255]: time="2024-12-13T03:45:59.079571344Z" level=info msg="RemovePodSandbox for \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\""
Dec 13 03:45:59.079700 env[1255]: time="2024-12-13T03:45:59.079663828Z" level=info msg="Forcibly stopping sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\""
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.307 [WARNING][4557] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0", GenerateName:"calico-apiserver-574c4c684d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd36f021-0ca4-4c3d-84a7-4ab0c0604448", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574c4c684d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"3148c0d4413785a9b8fed152471709decb40419a395babf43ec74a31f9df7b02", Pod:"calico-apiserver-574c4c684d-8pf6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie445b24f2ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.308 [INFO][4557] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.308 [INFO][4557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" iface="eth0" netns=""
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.308 [INFO][4557] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.308 [INFO][4557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.354 [INFO][4563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" HandleID="k8s-pod-network.51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.354 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.354 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.361 [WARNING][4563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" HandleID="k8s-pod-network.51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.361 [INFO][4563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" HandleID="k8s-pod-network.51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-calico--apiserver--574c4c684d--8pf6t-eth0"
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.363 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:59.371247 env[1255]: 2024-12-13 03:45:59.369 [INFO][4557] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79"
Dec 13 03:45:59.371875 env[1255]: time="2024-12-13T03:45:59.371845000Z" level=info msg="TearDown network for sandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\" successfully"
Dec 13 03:45:59.664467 env[1255]: time="2024-12-13T03:45:59.663968142Z" level=info msg="RemovePodSandbox \"51e7027f0566a3dd6995f890ad612e7844b2dcc15849adb3bc9cd07d9f2ccf79\" returns successfully"
Dec 13 03:45:59.666520 env[1255]: time="2024-12-13T03:45:59.666458990Z" level=info msg="StopPodSandbox for \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\""
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.766 [WARNING][4584] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e67c0431-ca4e-483a-b78f-aa6377b70035", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6", Pod:"csi-node-driver-v5c5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibde4ba64b7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.766 [INFO][4584] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.766 [INFO][4584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" iface="eth0" netns=""
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.766 [INFO][4584] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.766 [INFO][4584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.793 [INFO][4590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" HandleID="k8s-pod-network.38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.793 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.793 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.803 [WARNING][4590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" HandleID="k8s-pod-network.38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.803 [INFO][4590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" HandleID="k8s-pod-network.38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.805 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:59.808814 env[1255]: 2024-12-13 03:45:59.807 [INFO][4584] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:59.810306 env[1255]: time="2024-12-13T03:45:59.808837448Z" level=info msg="TearDown network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\" successfully"
Dec 13 03:45:59.810306 env[1255]: time="2024-12-13T03:45:59.808868056Z" level=info msg="StopPodSandbox for \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\" returns successfully"
Dec 13 03:45:59.810306 env[1255]: time="2024-12-13T03:45:59.809307030Z" level=info msg="RemovePodSandbox for \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\""
Dec 13 03:45:59.810306 env[1255]: time="2024-12-13T03:45:59.809352596Z" level=info msg="Forcibly stopping sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\""
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.886 [WARNING][4609] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e67c0431-ca4e-483a-b78f-aa6377b70035", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6", Pod:"csi-node-driver-v5c5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibde4ba64b7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.886 [INFO][4609] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.886 [INFO][4609] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" iface="eth0" netns=""
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.886 [INFO][4609] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.886 [INFO][4609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.915 [INFO][4616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" HandleID="k8s-pod-network.38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.915 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.915 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.924 [WARNING][4616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" HandleID="k8s-pod-network.38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.924 [INFO][4616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" HandleID="k8s-pod-network.38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-csi--node--driver--v5c5r-eth0"
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.925 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:45:59.929121 env[1255]: 2024-12-13 03:45:59.927 [INFO][4609] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457"
Dec 13 03:45:59.930380 env[1255]: time="2024-12-13T03:45:59.929536883Z" level=info msg="TearDown network for sandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\" successfully"
Dec 13 03:45:59.933635 env[1255]: time="2024-12-13T03:45:59.933606047Z" level=info msg="RemovePodSandbox \"38ce53fb72a868499f74f3186f9878d2e5ef806645dcc5e3e5bf901a6bc20457\" returns successfully"
Dec 13 03:45:59.934165 env[1255]: time="2024-12-13T03:45:59.934142996Z" level=info msg="StopPodSandbox for \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\""
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.006 [WARNING][4636] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2f24277a-80c8-4080-b170-650a6deabb6f", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9", Pod:"coredns-76f75df574-hv9jf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ef70ab4150", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.006 [INFO][4636] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.006 [INFO][4636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" iface="eth0" netns=""
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.006 [INFO][4636] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.006 [INFO][4636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.038 [INFO][4642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" HandleID="k8s-pod-network.783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.038 [INFO][4642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.038 [INFO][4642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.045 [WARNING][4642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" HandleID="k8s-pod-network.783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.045 [INFO][4642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" HandleID="k8s-pod-network.783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.047 [INFO][4642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:46:00.050241 env[1255]: 2024-12-13 03:46:00.048 [INFO][4636] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:46:00.050889 env[1255]: time="2024-12-13T03:46:00.050857252Z" level=info msg="TearDown network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\" successfully"
Dec 13 03:46:00.050971 env[1255]: time="2024-12-13T03:46:00.050950647Z" level=info msg="StopPodSandbox for \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\" returns successfully"
Dec 13 03:46:00.051471 env[1255]: time="2024-12-13T03:46:00.051450998Z" level=info msg="RemovePodSandbox for \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\""
Dec 13 03:46:00.051586 env[1255]: time="2024-12-13T03:46:00.051543451Z" level=info msg="Forcibly stopping sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\""
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.417 [WARNING][4661] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2f24277a-80c8-4080-b170-650a6deabb6f", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 3, 45, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-b-896f86a818.novalocal", ContainerID:"d0e631875074171b48b6b35f5cb3fed490fc1b5901dba81dba375154d56984d9", Pod:"coredns-76f75df574-hv9jf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ef70ab4150", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.417 [INFO][4661] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.417 [INFO][4661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" iface="eth0" netns=""
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.418 [INFO][4661] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.418 [INFO][4661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.471 [INFO][4668] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" HandleID="k8s-pod-network.783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.472 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.472 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.481 [WARNING][4668] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" HandleID="k8s-pod-network.783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.482 [INFO][4668] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" HandleID="k8s-pod-network.783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc" Workload="ci--3510--3--6--b--896f86a818.novalocal-k8s-coredns--76f75df574--hv9jf-eth0"
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.484 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 03:46:00.490972 env[1255]: 2024-12-13 03:46:00.489 [INFO][4661] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc"
Dec 13 03:46:00.492250 env[1255]: time="2024-12-13T03:46:00.492183014Z" level=info msg="TearDown network for sandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\" successfully"
Dec 13 03:46:00.593176 env[1255]: time="2024-12-13T03:46:00.593075217Z" level=info msg="RemovePodSandbox \"783d21456806573b43c5fe6a8df8ed9c78e5406aaec77df108eb9f6f65cc42dc\" returns successfully"
Dec 13 03:46:00.598449 env[1255]: time="2024-12-13T03:46:00.598392666Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:00.602386 env[1255]: time="2024-12-13T03:46:00.602305064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:00.606064 env[1255]: time="2024-12-13T03:46:00.606017909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:00.609277 env[1255]: time="2024-12-13T03:46:00.609230142Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:00.611040 env[1255]: time="2024-12-13T03:46:00.610971143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Dec 13 03:46:00.612136 env[1255]: time="2024-12-13T03:46:00.612088732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 03:46:00.677925 env[1255]: time="2024-12-13T03:46:00.677496231Z" level=info msg="CreateContainer within sandbox \"d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 03:46:00.694776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969405433.mount: Deactivated successfully.
Dec 13 03:46:00.700223 env[1255]: time="2024-12-13T03:46:00.700190068Z" level=info msg="CreateContainer within sandbox \"d80c58112d896e080603d258fe05db1d66a274e567dd6381043e03eb89d2b982\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b2206b54f3264dc55ef360ba9582d8ada02193ba367f1afb978c0fc1cd701356\""
Dec 13 03:46:00.701031 env[1255]: time="2024-12-13T03:46:00.701010390Z" level=info msg="StartContainer for \"b2206b54f3264dc55ef360ba9582d8ada02193ba367f1afb978c0fc1cd701356\""
Dec 13 03:46:00.797995 env[1255]: time="2024-12-13T03:46:00.797817729Z" level=info msg="StartContainer for \"b2206b54f3264dc55ef360ba9582d8ada02193ba367f1afb978c0fc1cd701356\" returns successfully"
Dec 13 03:46:01.337457 kubelet[2205]: I1213 03:46:01.336852    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59bc466bbc-4dbd6" podStartSLOduration=35.158915954 podStartE2EDuration="44.336673428s" podCreationTimestamp="2024-12-13 03:45:17 +0000 UTC" firstStartedPulling="2024-12-13 03:45:51.433892454 +0000 UTC m=+54.212492496" lastFinishedPulling="2024-12-13 03:46:00.611649938 +0000 UTC m=+63.390249970" observedRunningTime="2024-12-13 03:46:01.3322925 +0000 UTC m=+64.110892582" watchObservedRunningTime="2024-12-13 03:46:01.336673428 +0000 UTC m=+64.115273510"
Dec 13 03:46:01.338634 kubelet[2205]: I1213 03:46:01.337635    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-574c4c684d-8pf6t" podStartSLOduration=39.519671644 podStartE2EDuration="45.337539825s" podCreationTimestamp="2024-12-13 03:45:16 +0000 UTC" firstStartedPulling="2024-12-13 03:45:50.629789778 +0000 UTC m=+53.408389810" lastFinishedPulling="2024-12-13 03:45:56.447657909 +0000 UTC m=+59.226257991" observedRunningTime="2024-12-13 03:45:57.305682532 +0000 UTC m=+60.084282574" watchObservedRunningTime="2024-12-13 03:46:01.337539825 +0000 UTC m=+64.116139927"
Dec 13 03:46:02.624313 env[1255]: time="2024-12-13T03:46:02.624239901Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:02.628648 env[1255]: time="2024-12-13T03:46:02.628614618Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:02.632110 env[1255]: time="2024-12-13T03:46:02.632079917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:02.635031 env[1255]: time="2024-12-13T03:46:02.634991606Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:02.636215 env[1255]: time="2024-12-13T03:46:02.636184877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 03:46:02.645395 env[1255]: time="2024-12-13T03:46:02.645354770Z" level=info msg="CreateContainer within sandbox \"009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 03:46:02.668023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025098202.mount: Deactivated successfully.
Dec 13 03:46:02.676866 env[1255]: time="2024-12-13T03:46:02.676807596Z" level=info msg="CreateContainer within sandbox \"009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"70fa776f1d6aef9f3f53c54f256c9f2ebd082dd32751ffa525497b70b5a335b3\""
Dec 13 03:46:02.677696 env[1255]: time="2024-12-13T03:46:02.677671981Z" level=info msg="StartContainer for \"70fa776f1d6aef9f3f53c54f256c9f2ebd082dd32751ffa525497b70b5a335b3\""
Dec 13 03:46:02.769741 env[1255]: time="2024-12-13T03:46:02.769700153Z" level=info msg="StartContainer for \"70fa776f1d6aef9f3f53c54f256c9f2ebd082dd32751ffa525497b70b5a335b3\" returns successfully"
Dec 13 03:46:02.772081 env[1255]: time="2024-12-13T03:46:02.772055055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 03:46:03.664507 systemd[1]: run-containerd-runc-k8s.io-70fa776f1d6aef9f3f53c54f256c9f2ebd082dd32751ffa525497b70b5a335b3-runc.Jetewf.mount: Deactivated successfully.
Dec 13 03:46:05.581319 env[1255]: time="2024-12-13T03:46:05.581225614Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:05.584010 env[1255]: time="2024-12-13T03:46:05.583955001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:05.585983 env[1255]: time="2024-12-13T03:46:05.585915192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:05.589135 env[1255]: time="2024-12-13T03:46:05.589079906Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:46:05.589802 env[1255]: time="2024-12-13T03:46:05.589771986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 03:46:05.596009 env[1255]: time="2024-12-13T03:46:05.595904984Z" level=info msg="CreateContainer within sandbox \"009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 03:46:05.629442 env[1255]: time="2024-12-13T03:46:05.629403267Z" level=info msg="CreateContainer within sandbox \"009bc298d01a17c3e834255b12db3590cc0dd1a7bcc26ae87a8ab06159c190f6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2cc9ad308c685740e944b79645c9d085dc95475e66d2586287f74b44d140e0fb\""
Dec 13 03:46:05.630420 env[1255]: time="2024-12-13T03:46:05.630393166Z" level=info msg="StartContainer for \"2cc9ad308c685740e944b79645c9d085dc95475e66d2586287f74b44d140e0fb\""
Dec 13 03:46:05.724473 env[1255]: time="2024-12-13T03:46:05.724435727Z" level=info msg="StartContainer for \"2cc9ad308c685740e944b79645c9d085dc95475e66d2586287f74b44d140e0fb\" returns successfully"
Dec 13 03:46:06.139883 kubelet[2205]: I1213 03:46:06.139782    2205 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 03:46:06.143413 kubelet[2205]: I1213 03:46:06.143325    2205 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 03:46:06.339094 kubelet[2205]: I1213 03:46:06.339066    2205 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-v5c5r" podStartSLOduration=35.216268914 podStartE2EDuration="49.33901615s" podCreationTimestamp="2024-12-13 03:45:17 +0000 UTC" firstStartedPulling="2024-12-13 03:45:51.467407448 +0000 UTC m=+54.246007481" lastFinishedPulling="2024-12-13 03:46:05.590154644 +0000 UTC m=+68.368754717" observedRunningTime="2024-12-13 03:46:06.338175891 +0000 UTC m=+69.116775934" watchObservedRunningTime="2024-12-13 03:46:06.33901615 +0000 UTC m=+69.117616182"
Dec 13 03:46:16.962462 kubelet[2205]: I1213 03:46:16.962408    2205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 03:46:17.708000 audit[4840]: NETFILTER_CFG table=filter:116 family=2 entries=9 op=nft_register_rule pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:46:17.712573 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 13 03:46:17.750621 kernel: audit: type=1325 audit(1734061577.708:426): table=filter:116 family=2 entries=9 op=nft_register_rule pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:46:17.774501 kernel: audit: type=1300 audit(1734061577.708:426): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd4b90eba0 a2=0 a3=7ffd4b90eb8c items=0 ppid=2382 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:17.774610 kernel: audit: type=1327 audit(1734061577.708:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:46:17.774711 kernel: audit: type=1325 audit(1734061577.729:427): table=nat:117 family=2 entries=27 op=nft_register_chain pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:46:17.774761 kernel: audit: type=1300 audit(1734061577.729:427): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd4b90eba0 a2=0 a3=7ffd4b90eb8c items=0 ppid=2382 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:17.774825 kernel: audit: type=1327 audit(1734061577.729:427): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:46:17.708000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd4b90eba0 a2=0 a3=7ffd4b90eb8c items=0 ppid=2382 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:17.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:46:17.729000 audit[4840]: NETFILTER_CFG table=nat:117 family=2 entries=27 op=nft_register_chain pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:46:17.729000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd4b90eba0 a2=0 a3=7ffd4b90eb8c items=0 ppid=2382 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:17.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:46:19.181009 systemd[1]: run-containerd-runc-k8s.io-adb7c7aca8ced17dd3b8a564c878cdfe4b8986b02ba5ebb010cc5f919d5cef10-runc.iFlbkX.mount: Deactivated successfully.
Dec 13 03:46:23.035164 systemd[1]: Started sshd@7-172.24.4.219:22-172.24.4.1:38772.service.
Dec 13 03:46:23.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.219:22-172.24.4.1:38772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:23.047465 kernel: audit: type=1130 audit(1734061583.034:428): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.219:22-172.24.4.1:38772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:24.375000 audit[4864]: USER_ACCT pid=4864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:24.400717 kernel: audit: type=1101 audit(1734061584.375:429): pid=4864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:24.401471 kernel: audit: type=1103 audit(1734061584.396:430): pid=4864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:24.396000 audit[4864]: CRED_ACQ pid=4864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:24.402902 sshd[4864]: Accepted publickey for core from 172.24.4.1 port 38772 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:46:24.423258 kernel: audit: type=1006 audit(1734061584.396:431): pid=4864 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1
Dec 13 03:46:24.423991 kernel: audit: type=1300 audit(1734061584.396:431): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1cdce3d0 a2=3 a3=0 items=0 ppid=1 pid=4864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:24.396000 audit[4864]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1cdce3d0 a2=3 a3=0 items=0 ppid=1 pid=4864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:24.436695 kernel: audit: type=1327 audit(1734061584.396:431): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:24.396000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:24.438036 sshd[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:46:24.482956 systemd[1]: Started session-8.scope.
Dec 13 03:46:24.484019 systemd-logind[1241]: New session 8 of user core.
Dec 13 03:46:24.501000 audit[4864]: USER_START pid=4864 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:24.505000 audit[4867]: CRED_ACQ pid=4867 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:24.524395 kernel: audit: type=1105 audit(1734061584.501:432): pid=4864 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:24.524769 kernel: audit: type=1103 audit(1734061584.505:433): pid=4867 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:26.316968 sshd[4864]: pam_unix(sshd:session): session closed for user core
Dec 13 03:46:26.318000 audit[4864]: USER_END pid=4864 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:26.334397 kernel: audit: type=1106 audit(1734061586.318:434): pid=4864 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:26.318000 audit[4864]: CRED_DISP pid=4864 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:26.336167 systemd-logind[1241]: Session 8 logged out. Waiting for processes to exit.
Dec 13 03:46:26.339413 systemd[1]: sshd@7-172.24.4.219:22-172.24.4.1:38772.service: Deactivated successfully.
Dec 13 03:46:26.340963 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 03:46:26.344189 systemd-logind[1241]: Removed session 8.
Dec 13 03:46:26.345670 kernel: audit: type=1104 audit(1734061586.318:435): pid=4864 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:26.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.219:22-172.24.4.1:38772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:31.328700 systemd[1]: Started sshd@8-172.24.4.219:22-172.24.4.1:55408.service.
Dec 13 03:46:31.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.219:22-172.24.4.1:55408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:31.333688 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 03:46:31.334634 kernel: audit: type=1130 audit(1734061591.329:437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.219:22-172.24.4.1:55408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:32.830000 audit[4884]: USER_ACCT pid=4884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:32.832576 sshd[4884]: Accepted publickey for core from 172.24.4.1 port 55408 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:46:32.843409 kernel: audit: type=1101 audit(1734061592.830:438): pid=4884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:32.846000 audit[4884]: CRED_ACQ pid=4884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:32.849592 sshd[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:46:32.858403 kernel: audit: type=1103 audit(1734061592.846:439): pid=4884 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:32.846000 audit[4884]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5bd496f0 a2=3 a3=0 items=0 ppid=1 pid=4884 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:32.879907 kernel: audit: type=1006 audit(1734061592.846:440): pid=4884 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1
Dec 13 03:46:32.880085 kernel: audit: type=1300 audit(1734061592.846:440): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5bd496f0 a2=3 a3=0 items=0 ppid=1 pid=4884 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:32.880184 kernel: audit: type=1327 audit(1734061592.846:440): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:32.846000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:32.879725 systemd[1]: Started session-9.scope.
Dec 13 03:46:32.881466 systemd-logind[1241]: New session 9 of user core.
Dec 13 03:46:32.900000 audit[4884]: USER_START pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:32.914433 kernel: audit: type=1105 audit(1734061592.900:441): pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:32.914000 audit[4887]: CRED_ACQ pid=4887 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:32.925458 kernel: audit: type=1103 audit(1734061592.914:442): pid=4887 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:33.710925 sshd[4884]: pam_unix(sshd:session): session closed for user core
Dec 13 03:46:33.712000 audit[4884]: USER_END pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:33.726381 kernel: audit: type=1106 audit(1734061593.712:443): pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:33.712000 audit[4884]: CRED_DISP pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:33.727613 systemd[1]: sshd@8-172.24.4.219:22-172.24.4.1:55408.service: Deactivated successfully.
Dec 13 03:46:33.729872 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 03:46:33.737389 kernel: audit: type=1104 audit(1734061593.712:444): pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:33.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.219:22-172.24.4.1:55408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:33.739394 systemd-logind[1241]: Session 9 logged out. Waiting for processes to exit.
Dec 13 03:46:33.741691 systemd-logind[1241]: Removed session 9.
Dec 13 03:46:35.948765 kubelet[2205]: I1213 03:46:35.948711    2205 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 03:46:36.061000 audit[4900]: NETFILTER_CFG table=filter:118 family=2 entries=8 op=nft_register_rule pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:46:36.061000 audit[4900]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcb19ad0f0 a2=0 a3=7ffcb19ad0dc items=0 ppid=2382 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:36.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:46:36.066000 audit[4900]: NETFILTER_CFG table=nat:119 family=2 entries=34 op=nft_register_chain pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:46:36.066000 audit[4900]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffcb19ad0f0 a2=0 a3=7ffcb19ad0dc items=0 ppid=2382 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:36.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:46:38.718198 systemd[1]: Started sshd@9-172.24.4.219:22-172.24.4.1:41064.service.
Dec 13 03:46:38.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.219:22-172.24.4.1:41064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:38.721317 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 13 03:46:38.721477 kernel: audit: type=1130 audit(1734061598.718:448): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.219:22-172.24.4.1:41064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:40.145000 audit[4922]: USER_ACCT pid=4922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.148914 sshd[4922]: Accepted publickey for core from 172.24.4.1 port 41064 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:46:40.162053 kernel: audit: type=1101 audit(1734061600.145:449): pid=4922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.162242 kernel: audit: type=1103 audit(1734061600.160:450): pid=4922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.160000 audit[4922]: CRED_ACQ pid=4922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.179761 kernel: audit: type=1006 audit(1734061600.161:451): pid=4922 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Dec 13 03:46:40.180691 sshd[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:46:40.161000 audit[4922]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe75c38c10 a2=3 a3=0 items=0 ppid=1 pid=4922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:40.161000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:40.198182 kernel: audit: type=1300 audit(1734061600.161:451): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe75c38c10 a2=3 a3=0 items=0 ppid=1 pid=4922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:40.198619 kernel: audit: type=1327 audit(1734061600.161:451): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:40.207488 systemd-logind[1241]: New session 10 of user core.
Dec 13 03:46:40.210925 systemd[1]: Started session-10.scope.
Dec 13 03:46:40.225000 audit[4922]: USER_START pid=4922 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.232398 kernel: audit: type=1105 audit(1734061600.225:452): pid=4922 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.232000 audit[4925]: CRED_ACQ pid=4925 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.238401 kernel: audit: type=1103 audit(1734061600.232:453): pid=4925 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.903394 sshd[4922]: pam_unix(sshd:session): session closed for user core
Dec 13 03:46:40.904000 audit[4922]: USER_END pid=4922 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.910479 kernel: audit: type=1106 audit(1734061600.904:454): pid=4922 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.907559 systemd-logind[1241]: Session 10 logged out. Waiting for processes to exit.
Dec 13 03:46:40.908801 systemd[1]: sshd@9-172.24.4.219:22-172.24.4.1:41064.service: Deactivated successfully.
Dec 13 03:46:40.909839 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 03:46:40.904000 audit[4922]: CRED_DISP pid=4922 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.911237 systemd-logind[1241]: Removed session 10.
Dec 13 03:46:40.916889 kernel: audit: type=1104 audit(1734061600.904:455): pid=4922 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:40.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.219:22-172.24.4.1:41064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:45.913093 systemd[1]: Started sshd@10-172.24.4.219:22-172.24.4.1:35750.service.
Dec 13 03:46:45.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.219:22-172.24.4.1:35750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:45.917160 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 03:46:45.917277 kernel: audit: type=1130 audit(1734061605.914:457): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.219:22-172.24.4.1:35750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:47.252591 sshd[4936]: Accepted publickey for core from 172.24.4.1 port 35750 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:46:47.252000 audit[4936]: USER_ACCT pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:47.278467 kernel: audit: type=1101 audit(1734061607.252:458): pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:47.278568 kernel: audit: type=1103 audit(1734061607.264:459): pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:47.278627 kernel: audit: type=1006 audit(1734061607.264:460): pid=4936 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Dec 13 03:46:47.264000 audit[4936]: CRED_ACQ pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:47.277817 systemd[1]: Started session-11.scope.
Dec 13 03:46:47.265826 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:46:47.280992 systemd-logind[1241]: New session 11 of user core.
Dec 13 03:46:47.264000 audit[4936]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd82b48350 a2=3 a3=0 items=0 ppid=1 pid=4936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:47.331109 kernel: audit: type=1300 audit(1734061607.264:460): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd82b48350 a2=3 a3=0 items=0 ppid=1 pid=4936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:47.331231 kernel: audit: type=1327 audit(1734061607.264:460): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:47.331408 kernel: audit: type=1105 audit(1734061607.307:461): pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:47.398633 kernel: audit: type=1103 audit(1734061607.310:462): pid=4939 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:47.264000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:47.307000 audit[4936]: USER_START pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:47.310000 audit[4939]: CRED_ACQ pid=4939 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:48.319474 sshd[4936]: pam_unix(sshd:session): session closed for user core
Dec 13 03:46:48.321000 audit[4936]: USER_END pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:48.322000 audit[4936]: CRED_DISP pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:48.335037 systemd[1]: sshd@10-172.24.4.219:22-172.24.4.1:35750.service: Deactivated successfully.
Dec 13 03:46:48.338020 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 03:46:48.343436 kernel: audit: type=1106 audit(1734061608.321:463): pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:48.343624 kernel: audit: type=1104 audit(1734061608.322:464): pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:48.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.219:22-172.24.4.1:35750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:48.345056 systemd-logind[1241]: Session 11 logged out. Waiting for processes to exit.
Dec 13 03:46:48.348034 systemd-logind[1241]: Removed session 11.
Dec 13 03:46:53.328398 systemd[1]: Started sshd@11-172.24.4.219:22-172.24.4.1:35762.service.
Dec 13 03:46:53.331738 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 03:46:53.331815 kernel: audit: type=1130 audit(1734061613.328:466): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.219:22-172.24.4.1:35762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:53.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.219:22-172.24.4.1:35762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:54.538000 audit[4971]: USER_ACCT pid=4971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:54.539742 sshd[4971]: Accepted publickey for core from 172.24.4.1 port 35762 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:46:54.550441 kernel: audit: type=1101 audit(1734061614.538:467): pid=4971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:54.550000 audit[4971]: CRED_ACQ pid=4971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:54.551995 sshd[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:46:54.561447 kernel: audit: type=1103 audit(1734061614.550:468): pid=4971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:54.550000 audit[4971]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec0d65c40 a2=3 a3=0 items=0 ppid=1 pid=4971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:54.579675 kernel: audit: type=1006 audit(1734061614.550:469): pid=4971 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Dec 13 03:46:54.580144 kernel: audit: type=1300 audit(1734061614.550:469): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec0d65c40 a2=3 a3=0 items=0 ppid=1 pid=4971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:54.580244 kernel: audit: type=1327 audit(1734061614.550:469): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:54.550000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:54.593680 systemd-logind[1241]: New session 12 of user core.
Dec 13 03:46:54.595513 systemd[1]: Started session-12.scope.
Dec 13 03:46:54.607000 audit[4971]: USER_START pid=4971 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:54.622405 kernel: audit: type=1105 audit(1734061614.607:470): pid=4971 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:54.623000 audit[4974]: CRED_ACQ pid=4974 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:54.638489 kernel: audit: type=1103 audit(1734061614.623:471): pid=4974 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:55.685164 sshd[4971]: pam_unix(sshd:session): session closed for user core
Dec 13 03:46:55.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.219:22-172.24.4.1:51972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:55.689744 systemd[1]: Started sshd@12-172.24.4.219:22-172.24.4.1:51972.service.
Dec 13 03:46:55.707622 kernel: audit: type=1130 audit(1734061615.689:472): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.219:22-172.24.4.1:51972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:55.707821 kernel: audit: type=1106 audit(1734061615.701:473): pid=4971 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:55.701000 audit[4971]: USER_END pid=4971 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:55.704964 systemd[1]: sshd@11-172.24.4.219:22-172.24.4.1:35762.service: Deactivated successfully.
Dec 13 03:46:55.707197 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 03:46:55.712563 systemd-logind[1241]: Session 12 logged out. Waiting for processes to exit.
Dec 13 03:46:55.716187 systemd-logind[1241]: Removed session 12.
Dec 13 03:46:55.701000 audit[4971]: CRED_DISP pid=4971 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:55.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.219:22-172.24.4.1:35762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:57.003000 audit[4983]: USER_ACCT pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:57.003896 sshd[4983]: Accepted publickey for core from 172.24.4.1 port 51972 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:46:57.008000 audit[4983]: CRED_ACQ pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:57.009000 audit[4983]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff97dc8270 a2=3 a3=0 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:57.009000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:57.012290 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:46:57.030114 systemd-logind[1241]: New session 13 of user core.
Dec 13 03:46:57.033088 systemd[1]: Started session-13.scope.
Dec 13 03:46:57.057000 audit[4983]: USER_START pid=4983 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:57.061000 audit[4988]: CRED_ACQ pid=4988 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:57.988829 sshd[4983]: pam_unix(sshd:session): session closed for user core
Dec 13 03:46:57.990000 audit[4983]: USER_END pid=4983 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:57.990000 audit[4983]: CRED_DISP pid=4983 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:57.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.219:22-172.24.4.1:51984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:57.992052 systemd[1]: Started sshd@13-172.24.4.219:22-172.24.4.1:51984.service.
Dec 13 03:46:57.993279 systemd[1]: sshd@12-172.24.4.219:22-172.24.4.1:51972.service: Deactivated successfully.
Dec 13 03:46:57.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.219:22-172.24.4.1:51972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:46:57.994242 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 03:46:57.997828 systemd-logind[1241]: Session 13 logged out. Waiting for processes to exit.
Dec 13 03:46:58.000315 systemd-logind[1241]: Removed session 13.
Dec 13 03:46:59.408000 audit[4996]: USER_ACCT pid=4996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:59.409772 sshd[4996]: Accepted publickey for core from 172.24.4.1 port 51984 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:46:59.411465 kernel: kauditd_printk_skb: 13 callbacks suppressed
Dec 13 03:46:59.411668 kernel: audit: type=1101 audit(1734061619.408:485): pid=4996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:59.424000 audit[4996]: CRED_ACQ pid=4996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:59.447051 kernel: audit: type=1103 audit(1734061619.424:486): pid=4996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:59.447330 kernel: audit: type=1006 audit(1734061619.425:487): pid=4996 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Dec 13 03:46:59.447481 kernel: audit: type=1300 audit(1734061619.425:487): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee7811e10 a2=3 a3=0 items=0 ppid=1 pid=4996 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:59.425000 audit[4996]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee7811e10 a2=3 a3=0 items=0 ppid=1 pid=4996 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:46:59.448669 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:46:59.425000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:59.467759 kernel: audit: type=1327 audit(1734061619.425:487): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:46:59.490169 systemd-logind[1241]: New session 14 of user core.
Dec 13 03:46:59.493773 systemd[1]: Started session-14.scope.
Dec 13 03:46:59.513000 audit[4996]: USER_START pid=4996 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:59.527377 kernel: audit: type=1105 audit(1734061619.513:488): pid=4996 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:59.528000 audit[5001]: CRED_ACQ pid=5001 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:46:59.534397 kernel: audit: type=1103 audit(1734061619.528:489): pid=5001 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:00.269778 sshd[4996]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:00.270000 audit[4996]: USER_END pid=4996 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:00.277715 kernel: audit: type=1106 audit(1734061620.270:490): pid=4996 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:00.276703 systemd[1]: sshd@13-172.24.4.219:22-172.24.4.1:51984.service: Deactivated successfully.
Dec 13 03:47:00.278447 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 03:47:00.278936 systemd-logind[1241]: Session 14 logged out. Waiting for processes to exit.
Dec 13 03:47:00.280128 systemd-logind[1241]: Removed session 14.
Dec 13 03:47:00.271000 audit[4996]: CRED_DISP pid=4996 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:00.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.219:22-172.24.4.1:51984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:00.289829 kernel: audit: type=1104 audit(1734061620.271:491): pid=4996 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:00.290041 kernel: audit: type=1131 audit(1734061620.276:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.219:22-172.24.4.1:51984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:03.434028 systemd[1]: run-containerd-runc-k8s.io-b2206b54f3264dc55ef360ba9582d8ada02193ba367f1afb978c0fc1cd701356-runc.cqIG4w.mount: Deactivated successfully.
Dec 13 03:47:05.279240 systemd[1]: Started sshd@14-172.24.4.219:22-172.24.4.1:55770.service.
Dec 13 03:47:05.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.219:22-172.24.4.1:55770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:05.291423 kernel: audit: type=1130 audit(1734061625.280:493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.219:22-172.24.4.1:55770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:06.406887 systemd[1]: run-containerd-runc-k8s.io-b2206b54f3264dc55ef360ba9582d8ada02193ba367f1afb978c0fc1cd701356-runc.4mX9GQ.mount: Deactivated successfully.
Dec 13 03:47:06.477000 audit[5035]: USER_ACCT pid=5035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:06.478097 sshd[5035]: Accepted publickey for core from 172.24.4.1 port 55770 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:06.482000 audit[5035]: CRED_ACQ pid=5035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:06.484809 sshd[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:06.488879 kernel: audit: type=1101 audit(1734061626.477:494): pid=5035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:06.488943 kernel: audit: type=1103 audit(1734061626.482:495): pid=5035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:06.488968 kernel: audit: type=1006 audit(1734061626.482:496): pid=5035 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Dec 13 03:47:06.482000 audit[5035]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8d6010e0 a2=3 a3=0 items=0 ppid=1 pid=5035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:06.502246 kernel: audit: type=1300 audit(1734061626.482:496): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8d6010e0 a2=3 a3=0 items=0 ppid=1 pid=5035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:06.482000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:06.506523 kernel: audit: type=1327 audit(1734061626.482:496): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:06.510094 systemd[1]: Started session-15.scope.
Dec 13 03:47:06.510404 systemd-logind[1241]: New session 15 of user core.
Dec 13 03:47:06.530000 audit[5035]: USER_START pid=5035 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:06.539000 audit[5058]: CRED_ACQ pid=5058 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:06.545744 kernel: audit: type=1105 audit(1734061626.530:497): pid=5035 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:06.545803 kernel: audit: type=1103 audit(1734061626.539:498): pid=5058 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:07.125853 sshd[5035]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:07.127000 audit[5035]: USER_END pid=5035 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:07.131260 systemd[1]: sshd@14-172.24.4.219:22-172.24.4.1:55770.service: Deactivated successfully.
Dec 13 03:47:07.134487 kernel: audit: type=1106 audit(1734061627.127:499): pid=5035 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:07.133145 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 03:47:07.128000 audit[5035]: CRED_DISP pid=5035 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:07.139484 kernel: audit: type=1104 audit(1734061627.128:500): pid=5035 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:07.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.219:22-172.24.4.1:55770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:07.139875 systemd-logind[1241]: Session 15 logged out. Waiting for processes to exit.
Dec 13 03:47:07.144001 systemd-logind[1241]: Removed session 15.
Dec 13 03:47:12.134950 systemd[1]: Started sshd@15-172.24.4.219:22-172.24.4.1:55780.service.
Dec 13 03:47:12.138958 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 03:47:12.139104 kernel: audit: type=1130 audit(1734061632.135:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.219:22-172.24.4.1:55780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:12.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.219:22-172.24.4.1:55780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:13.500000 audit[5076]: USER_ACCT pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:13.501607 sshd[5076]: Accepted publickey for core from 172.24.4.1 port 55780 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:13.514596 kernel: audit: type=1101 audit(1734061633.500:503): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:13.532935 kernel: audit: type=1103 audit(1734061633.514:504): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:13.533079 kernel: audit: type=1006 audit(1734061633.514:505): pid=5076 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Dec 13 03:47:13.533126 kernel: audit: type=1300 audit(1734061633.514:505): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd311ced0 a2=3 a3=0 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:13.514000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:13.514000 audit[5076]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd311ced0 a2=3 a3=0 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:13.516089 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:13.514000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:13.549662 kernel: audit: type=1327 audit(1734061633.514:505): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:13.556473 systemd-logind[1241]: New session 16 of user core.
Dec 13 03:47:13.560646 systemd[1]: Started session-16.scope.
Dec 13 03:47:13.576000 audit[5076]: USER_START pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:13.585354 kernel: audit: type=1105 audit(1734061633.576:506): pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:13.585000 audit[5079]: CRED_ACQ pid=5079 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:13.591350 kernel: audit: type=1103 audit(1734061633.585:507): pid=5079 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:14.293949 sshd[5076]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:14.300000 audit[5076]: USER_END pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:14.302669 systemd[1]: sshd@15-172.24.4.219:22-172.24.4.1:55780.service: Deactivated successfully.
Dec 13 03:47:14.303599 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 03:47:14.307368 kernel: audit: type=1106 audit(1734061634.300:508): pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:14.307359 systemd-logind[1241]: Session 16 logged out. Waiting for processes to exit.
Dec 13 03:47:14.300000 audit[5076]: CRED_DISP pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:14.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.219:22-172.24.4.1:55780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:14.312419 kernel: audit: type=1104 audit(1734061634.300:509): pid=5076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:14.313415 systemd-logind[1241]: Removed session 16.
Dec 13 03:47:19.285672 systemd[1]: Started sshd@16-172.24.4.219:22-172.24.4.1:36114.service.
Dec 13 03:47:19.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.219:22-172.24.4.1:36114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:19.288783 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 03:47:19.288852 kernel: audit: type=1130 audit(1734061639.285:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.219:22-172.24.4.1:36114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:20.648000 audit[5113]: USER_ACCT pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:20.649516 sshd[5113]: Accepted publickey for core from 172.24.4.1 port 36114 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:20.655557 kernel: audit: type=1101 audit(1734061640.648:512): pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:20.655000 audit[5113]: CRED_ACQ pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:20.657212 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:20.664875 kernel: audit: type=1103 audit(1734061640.655:513): pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:20.665024 kernel: audit: type=1006 audit(1734061640.656:514): pid=5113 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Dec 13 03:47:20.656000 audit[5113]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4d126ac0 a2=3 a3=0 items=0 ppid=1 pid=5113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:20.670502 kernel: audit: type=1300 audit(1734061640.656:514): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4d126ac0 a2=3 a3=0 items=0 ppid=1 pid=5113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:20.675631 kernel: audit: type=1327 audit(1734061640.656:514): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:20.656000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:20.683729 systemd-logind[1241]: New session 17 of user core.
Dec 13 03:47:20.686295 systemd[1]: Started session-17.scope.
Dec 13 03:47:20.712010 kernel: audit: type=1105 audit(1734061640.704:515): pid=5113 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:20.704000 audit[5113]: USER_START pid=5113 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:20.711000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:20.717350 kernel: audit: type=1103 audit(1734061640.711:516): pid=5116 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:21.488808 sshd[5113]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:21.507424 kernel: audit: type=1106 audit(1734061641.492:517): pid=5113 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:21.492000 audit[5113]: USER_END pid=5113 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:21.494017 systemd[1]: Started sshd@17-172.24.4.219:22-172.24.4.1:36118.service.
Dec 13 03:47:21.495863 systemd[1]: sshd@16-172.24.4.219:22-172.24.4.1:36114.service: Deactivated successfully.
Dec 13 03:47:21.498445 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 03:47:21.492000 audit[5113]: CRED_DISP pid=5113 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:21.522375 kernel: audit: type=1104 audit(1734061641.492:518): pid=5113 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:21.522786 systemd-logind[1241]: Session 17 logged out. Waiting for processes to exit.
Dec 13 03:47:21.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.219:22-172.24.4.1:36118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:21.529823 systemd-logind[1241]: Removed session 17.
Dec 13 03:47:21.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.219:22-172.24.4.1:36114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:23.002000 audit[5124]: USER_ACCT pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:23.003654 sshd[5124]: Accepted publickey for core from 172.24.4.1 port 36118 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:23.004000 audit[5124]: CRED_ACQ pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:23.005000 audit[5124]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedc0a8740 a2=3 a3=0 items=0 ppid=1 pid=5124 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:23.005000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:23.006208 sshd[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:23.015456 systemd-logind[1241]: New session 18 of user core.
Dec 13 03:47:23.017418 systemd[1]: Started session-18.scope.
Dec 13 03:47:23.033000 audit[5124]: USER_START pid=5124 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:23.036000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:24.404139 kernel: kauditd_printk_skb: 9 callbacks suppressed
Dec 13 03:47:24.404425 kernel: audit: type=1130 audit(1734061644.396:526): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.219:22-172.24.4.1:36126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:24.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.219:22-172.24.4.1:36126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:24.396509 systemd[1]: Started sshd@18-172.24.4.219:22-172.24.4.1:36126.service.
Dec 13 03:47:24.416005 sshd[5124]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:24.428000 audit[5124]: USER_END pid=5124 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:24.443649 systemd[1]: sshd@17-172.24.4.219:22-172.24.4.1:36118.service: Deactivated successfully.
Dec 13 03:47:24.445796 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 03:47:24.446960 systemd-logind[1241]: Session 18 logged out. Waiting for processes to exit.
Dec 13 03:47:24.451658 kernel: audit: type=1106 audit(1734061644.428:527): pid=5124 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:24.431000 audit[5124]: CRED_DISP pid=5124 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:24.453956 systemd-logind[1241]: Removed session 18.
Dec 13 03:47:24.469313 kernel: audit: type=1104 audit(1734061644.431:528): pid=5124 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:24.469484 kernel: audit: type=1131 audit(1734061644.443:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.219:22-172.24.4.1:36118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:24.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.219:22-172.24.4.1:36118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:25.808000 audit[5135]: USER_ACCT pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:25.809605 sshd[5135]: Accepted publickey for core from 172.24.4.1 port 36126 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:25.817000 audit[5135]: CRED_ACQ pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:25.819639 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:25.825536 kernel: audit: type=1101 audit(1734061645.808:530): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:25.825698 kernel: audit: type=1103 audit(1734061645.817:531): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:25.826943 kernel: audit: type=1006 audit(1734061645.817:532): pid=5135 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Dec 13 03:47:25.817000 audit[5135]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd0c236b0 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:25.839757 systemd[1]: Started session-19.scope.
Dec 13 03:47:25.841196 systemd-logind[1241]: New session 19 of user core.
Dec 13 03:47:25.844396 kernel: audit: type=1300 audit(1734061645.817:532): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd0c236b0 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:25.817000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:25.852559 kernel: audit: type=1327 audit(1734061645.817:532): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:25.860000 audit[5135]: USER_START pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:25.873377 kernel: audit: type=1105 audit(1734061645.860:533): pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:25.863000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:28.796000 audit[5166]: NETFILTER_CFG table=filter:120 family=2 entries=20 op=nft_register_rule pid=5166 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:47:28.796000 audit[5166]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffef4be09f0 a2=0 a3=7ffef4be09dc items=0 ppid=2382 pid=5166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:28.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:47:28.803000 audit[5166]: NETFILTER_CFG table=nat:121 family=2 entries=22 op=nft_register_rule pid=5166 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:47:28.803000 audit[5166]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffef4be09f0 a2=0 a3=0 items=0 ppid=2382 pid=5166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:28.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:47:28.821000 audit[5168]: NETFILTER_CFG table=filter:122 family=2 entries=32 op=nft_register_rule pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:47:28.821000 audit[5168]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffff0ebe60 a2=0 a3=7fffff0ebe4c items=0 ppid=2382 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:28.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:47:28.826000 audit[5168]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:47:28.826000 audit[5168]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffff0ebe60 a2=0 a3=0 items=0 ppid=2382 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:28.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:47:29.058096 sshd[5135]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:29.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.219:22-172.24.4.1:57964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:29.061759 systemd[1]: Started sshd@19-172.24.4.219:22-172.24.4.1:57964.service.
Dec 13 03:47:29.069000 audit[5135]: USER_END pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:29.069000 audit[5135]: CRED_DISP pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:29.071891 systemd[1]: sshd@18-172.24.4.219:22-172.24.4.1:36126.service: Deactivated successfully.
Dec 13 03:47:29.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.219:22-172.24.4.1:36126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:29.074175 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 03:47:29.074590 systemd-logind[1241]: Session 19 logged out. Waiting for processes to exit.
Dec 13 03:47:29.081800 systemd-logind[1241]: Removed session 19.
Dec 13 03:47:30.142000 audit[5169]: USER_ACCT pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:30.144556 sshd[5169]: Accepted publickey for core from 172.24.4.1 port 57964 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:30.145798 kernel: kauditd_printk_skb: 17 callbacks suppressed
Dec 13 03:47:30.146710 kernel: audit: type=1101 audit(1734061650.142:543): pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:30.156000 audit[5169]: CRED_ACQ pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:30.160831 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:30.167667 kernel: audit: type=1103 audit(1734061650.156:544): pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:30.168725 kernel: audit: type=1006 audit(1734061650.156:545): pid=5169 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Dec 13 03:47:30.156000 audit[5169]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe404f460 a2=3 a3=0 items=0 ppid=1 pid=5169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:30.156000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:30.191318 kernel: audit: type=1300 audit(1734061650.156:545): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe404f460 a2=3 a3=0 items=0 ppid=1 pid=5169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:30.191492 kernel: audit: type=1327 audit(1734061650.156:545): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:30.196680 systemd-logind[1241]: New session 20 of user core.
Dec 13 03:47:30.198603 systemd[1]: Started session-20.scope.
Dec 13 03:47:30.216000 audit[5169]: USER_START pid=5169 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:30.229514 kernel: audit: type=1105 audit(1734061650.216:546): pid=5169 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:30.230000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:30.241421 kernel: audit: type=1103 audit(1734061650.230:547): pid=5174 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:31.904656 sshd[5169]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:31.909000 audit[5169]: USER_END pid=5169 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:31.912972 systemd[1]: Started sshd@20-172.24.4.219:22-172.24.4.1:57968.service.
Dec 13 03:47:31.914907 systemd[1]: sshd@19-172.24.4.219:22-172.24.4.1:57964.service: Deactivated successfully.
Dec 13 03:47:31.917459 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 03:47:31.923420 kernel: audit: type=1106 audit(1734061651.909:548): pid=5169 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:31.987534 kernel: audit: type=1104 audit(1734061651.909:549): pid=5169 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:31.996442 kernel: audit: type=1130 audit(1734061651.913:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.219:22-172.24.4.1:57968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:31.909000 audit[5169]: CRED_DISP pid=5169 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:31.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.219:22-172.24.4.1:57968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:31.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.219:22-172.24.4.1:57964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:31.942776 systemd-logind[1241]: Session 20 logged out. Waiting for processes to exit.
Dec 13 03:47:31.954644 systemd-logind[1241]: Removed session 20.
Dec 13 03:47:33.230000 audit[5181]: USER_ACCT pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:33.232000 audit[5181]: CRED_ACQ pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:33.232000 audit[5181]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4132a010 a2=3 a3=0 items=0 ppid=1 pid=5181 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:33.232000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:33.249104 sshd[5181]: Accepted publickey for core from 172.24.4.1 port 57968 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:33.249080 sshd[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:33.270967 systemd-logind[1241]: New session 21 of user core.
Dec 13 03:47:33.272430 systemd[1]: Started session-21.scope.
Dec 13 03:47:33.285000 audit[5181]: USER_START pid=5181 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:33.289000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:34.165877 sshd[5181]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:34.167000 audit[5181]: USER_END pid=5181 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:34.168000 audit[5181]: CRED_DISP pid=5181 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:34.171195 systemd[1]: sshd@20-172.24.4.219:22-172.24.4.1:57968.service: Deactivated successfully.
Dec 13 03:47:34.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.219:22-172.24.4.1:57968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:34.173645 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 03:47:34.173748 systemd-logind[1241]: Session 21 logged out. Waiting for processes to exit.
Dec 13 03:47:34.177029 systemd-logind[1241]: Removed session 21.
Dec 13 03:47:37.770000 audit[5217]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:47:37.773718 kernel: kauditd_printk_skb: 11 callbacks suppressed
Dec 13 03:47:37.773860 kernel: audit: type=1325 audit(1734061657.770:560): table=filter:124 family=2 entries=20 op=nft_register_rule pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:47:37.770000 audit[5217]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc5e79fa10 a2=0 a3=7ffc5e79f9fc items=0 ppid=2382 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:37.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:47:37.800119 kernel: audit: type=1300 audit(1734061657.770:560): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc5e79fa10 a2=0 a3=7ffc5e79f9fc items=0 ppid=2382 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:37.800212 kernel: audit: type=1327 audit(1734061657.770:560): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:47:37.800278 kernel: audit: type=1325 audit(1734061657.784:561): table=nat:125 family=2 entries=106 op=nft_register_chain pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:47:37.784000 audit[5217]: NETFILTER_CFG table=nat:125 family=2 entries=106 op=nft_register_chain pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 13 03:47:37.784000 audit[5217]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc5e79fa10 a2=0 a3=7ffc5e79f9fc items=0 ppid=2382 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:37.811966 kernel: audit: type=1300 audit(1734061657.784:561): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc5e79fa10 a2=0 a3=7ffc5e79f9fc items=0 ppid=2382 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:37.812942 kernel: audit: type=1327 audit(1734061657.784:561): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:47:37.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 13 03:47:39.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.219:22-172.24.4.1:43322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:39.172655 systemd[1]: Started sshd@21-172.24.4.219:22-172.24.4.1:43322.service.
Dec 13 03:47:39.183390 kernel: audit: type=1130 audit(1734061659.172:562): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.219:22-172.24.4.1:43322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:40.467000 audit[5221]: USER_ACCT pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:40.469896 sshd[5221]: Accepted publickey for core from 172.24.4.1 port 43322 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:40.478000 audit[5221]: CRED_ACQ pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:40.489416 kernel: audit: type=1101 audit(1734061660.467:563): pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:40.490266 kernel: audit: type=1103 audit(1734061660.478:564): pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:40.490414 kernel: audit: type=1006 audit(1734061660.479:565): pid=5221 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Dec 13 03:47:40.479000 audit[5221]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff39945990 a2=3 a3=0 items=0 ppid=1 pid=5221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:40.479000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:40.497076 sshd[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:40.509307 systemd[1]: Started session-22.scope.
Dec 13 03:47:40.509996 systemd-logind[1241]: New session 22 of user core.
Dec 13 03:47:40.526000 audit[5221]: USER_START pid=5221 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:40.530000 audit[5224]: CRED_ACQ pid=5224 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:41.352571 sshd[5221]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:41.353000 audit[5221]: USER_END pid=5221 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:41.353000 audit[5221]: CRED_DISP pid=5221 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:41.355309 systemd[1]: sshd@21-172.24.4.219:22-172.24.4.1:43322.service: Deactivated successfully.
Dec 13 03:47:41.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.219:22-172.24.4.1:43322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:41.356533 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 03:47:41.358421 systemd-logind[1241]: Session 22 logged out. Waiting for processes to exit.
Dec 13 03:47:41.360922 systemd-logind[1241]: Removed session 22.
Dec 13 03:47:46.365942 systemd[1]: Started sshd@22-172.24.4.219:22-172.24.4.1:35134.service.
Dec 13 03:47:46.379699 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 13 03:47:46.379840 kernel: audit: type=1130 audit(1734061666.366:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.219:22-172.24.4.1:35134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:46.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.219:22-172.24.4.1:35134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:47.844000 audit[5234]: USER_ACCT pid=5234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:47.856443 kernel: audit: type=1101 audit(1734061667.844:572): pid=5234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:47.856528 sshd[5234]: Accepted publickey for core from 172.24.4.1 port 35134 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:47.868437 kernel: audit: type=1103 audit(1734061667.856:573): pid=5234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:47.856000 audit[5234]: CRED_ACQ pid=5234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:47.858381 sshd[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:47.877425 kernel: audit: type=1006 audit(1734061667.856:574): pid=5234 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Dec 13 03:47:47.856000 audit[5234]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd23696ea0 a2=3 a3=0 items=0 ppid=1 pid=5234 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:47.885283 systemd[1]: Started session-23.scope.
Dec 13 03:47:47.887103 systemd-logind[1241]: New session 23 of user core.
Dec 13 03:47:47.856000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:47.893388 kernel: audit: type=1300 audit(1734061667.856:574): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd23696ea0 a2=3 a3=0 items=0 ppid=1 pid=5234 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:47.893568 kernel: audit: type=1327 audit(1734061667.856:574): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:47.904000 audit[5234]: USER_START pid=5234 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:47.908000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:47.923908 kernel: audit: type=1105 audit(1734061667.904:575): pid=5234 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:47.924056 kernel: audit: type=1103 audit(1734061667.908:576): pid=5237 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:48.611553 sshd[5234]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:48.613000 audit[5234]: USER_END pid=5234 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:48.616867 systemd-logind[1241]: Session 23 logged out. Waiting for processes to exit.
Dec 13 03:47:48.619169 systemd[1]: sshd@22-172.24.4.219:22-172.24.4.1:35134.service: Deactivated successfully.
Dec 13 03:47:48.627473 kernel: audit: type=1106 audit(1734061668.613:577): pid=5234 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:48.627567 kernel: audit: type=1104 audit(1734061668.613:578): pid=5234 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:48.613000 audit[5234]: CRED_DISP pid=5234 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:48.623314 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 03:47:48.625941 systemd-logind[1241]: Removed session 23.
Dec 13 03:47:48.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.219:22-172.24.4.1:35134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:49.189883 systemd[1]: run-containerd-runc-k8s.io-adb7c7aca8ced17dd3b8a564c878cdfe4b8986b02ba5ebb010cc5f919d5cef10-runc.4sjbNp.mount: Deactivated successfully.
Dec 13 03:47:53.624061 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 03:47:53.632006 kernel: audit: type=1130 audit(1734061673.619:580): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.219:22-172.24.4.1:35140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:53.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.219:22-172.24.4.1:35140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:53.619047 systemd[1]: Started sshd@23-172.24.4.219:22-172.24.4.1:35140.service.
Dec 13 03:47:54.906000 audit[5269]: USER_ACCT pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:54.909126 sshd[5269]: Accepted publickey for core from 172.24.4.1 port 35140 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:47:54.920000 audit[5269]: CRED_ACQ pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:54.922489 sshd[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:47:54.933587 kernel: audit: type=1101 audit(1734061674.906:581): pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:54.933957 kernel: audit: type=1103 audit(1734061674.920:582): pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:54.934629 kernel: audit: type=1006 audit(1734061674.920:583): pid=5269 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Dec 13 03:47:54.920000 audit[5269]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9d7ab130 a2=3 a3=0 items=0 ppid=1 pid=5269 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:54.949813 systemd[1]: Started session-24.scope.
Dec 13 03:47:54.952329 systemd-logind[1241]: New session 24 of user core.
Dec 13 03:47:54.920000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:54.957209 kernel: audit: type=1300 audit(1734061674.920:583): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9d7ab130 a2=3 a3=0 items=0 ppid=1 pid=5269 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:47:54.957384 kernel: audit: type=1327 audit(1734061674.920:583): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:47:54.970000 audit[5269]: USER_START pid=5269 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:54.983729 kernel: audit: type=1105 audit(1734061674.970:584): pid=5269 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:54.983000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:54.994443 kernel: audit: type=1103 audit(1734061674.983:585): pid=5272 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:55.713591 sshd[5269]: pam_unix(sshd:session): session closed for user core
Dec 13 03:47:55.714000 audit[5269]: USER_END pid=5269 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:55.715000 audit[5269]: CRED_DISP pid=5269 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:55.721202 systemd[1]: sshd@23-172.24.4.219:22-172.24.4.1:35140.service: Deactivated successfully.
Dec 13 03:47:55.723011 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 03:47:55.724357 kernel: audit: type=1106 audit(1734061675.714:586): pid=5269 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:55.724550 kernel: audit: type=1104 audit(1734061675.715:587): pid=5269 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:47:55.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.219:22-172.24.4.1:35140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:47:55.725266 systemd-logind[1241]: Session 24 logged out. Waiting for processes to exit.
Dec 13 03:47:55.726236 systemd-logind[1241]: Removed session 24.
Dec 13 03:48:00.721589 systemd[1]: Started sshd@24-172.24.4.219:22-172.24.4.1:44084.service.
Dec 13 03:48:00.725571 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 03:48:00.725664 kernel: audit: type=1130 audit(1734061680.721:589): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.219:22-172.24.4.1:44084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:48:00.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.219:22-172.24.4.1:44084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:48:01.896000 audit[5284]: USER_ACCT pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:01.897520 sshd[5284]: Accepted publickey for core from 172.24.4.1 port 44084 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:48:01.907405 kernel: audit: type=1101 audit(1734061681.896:590): pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:01.907000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:01.908894 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:48:01.918399 kernel: audit: type=1103 audit(1734061681.907:591): pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:01.918522 kernel: audit: type=1006 audit(1734061681.907:592): pid=5284 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Dec 13 03:48:01.907000 audit[5284]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecb836090 a2=3 a3=0 items=0 ppid=1 pid=5284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:48:01.936011 kernel: audit: type=1300 audit(1734061681.907:592): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecb836090 a2=3 a3=0 items=0 ppid=1 pid=5284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:48:01.940767 kernel: audit: type=1327 audit(1734061681.907:592): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:48:01.907000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:48:01.945459 systemd-logind[1241]: New session 25 of user core.
Dec 13 03:48:01.947875 systemd[1]: Started session-25.scope.
Dec 13 03:48:01.964000 audit[5284]: USER_START pid=5284 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:01.971000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:01.987733 kernel: audit: type=1105 audit(1734061681.964:593): pid=5284 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:01.987904 kernel: audit: type=1103 audit(1734061681.971:594): pid=5287 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:02.745617 sshd[5284]: pam_unix(sshd:session): session closed for user core
Dec 13 03:48:02.747000 audit[5284]: USER_END pid=5284 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:02.754890 systemd[1]: sshd@24-172.24.4.219:22-172.24.4.1:44084.service: Deactivated successfully.
Dec 13 03:48:02.755380 kernel: audit: type=1106 audit(1734061682.747:595): pid=5284 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:02.756411 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 03:48:02.747000 audit[5284]: CRED_DISP pid=5284 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:02.756799 systemd-logind[1241]: Session 25 logged out. Waiting for processes to exit.
Dec 13 03:48:02.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.219:22-172.24.4.1:44084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:48:02.762768 kernel: audit: type=1104 audit(1734061682.747:596): pid=5284 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:02.763069 systemd-logind[1241]: Removed session 25.
Dec 13 03:48:07.754602 systemd[1]: Started sshd@25-172.24.4.219:22-172.24.4.1:39548.service.
Dec 13 03:48:07.757918 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 03:48:07.758053 kernel: audit: type=1130 audit(1734061687.754:598): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.24.4.219:22-172.24.4.1:39548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:48:07.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.24.4.219:22-172.24.4.1:39548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:48:09.252000 audit[5336]: USER_ACCT pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:09.252930 sshd[5336]: Accepted publickey for core from 172.24.4.1 port 39548 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 03:48:09.264417 kernel: audit: type=1101 audit(1734061689.252:599): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:09.263000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:09.275131 sshd[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:48:09.276691 kernel: audit: type=1103 audit(1734061689.263:600): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:09.276769 kernel: audit: type=1006 audit(1734061689.263:601): pid=5336 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Dec 13 03:48:09.263000 audit[5336]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff55a829a0 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:48:09.294387 kernel: audit: type=1300 audit(1734061689.263:601): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff55a829a0 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:48:09.263000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 03:48:09.300069 kernel: audit: type=1327 audit(1734061689.263:601): proctitle=737368643A20636F7265205B707269765D
Dec 13 03:48:09.307317 systemd-logind[1241]: New session 26 of user core.
Dec 13 03:48:09.308511 systemd[1]: Started session-26.scope.
Dec 13 03:48:09.321000 audit[5336]: USER_START pid=5336 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:09.334649 kernel: audit: type=1105 audit(1734061689.321:602): pid=5336 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:09.335000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:09.346381 kernel: audit: type=1103 audit(1734061689.335:603): pid=5341 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:10.038161 sshd[5336]: pam_unix(sshd:session): session closed for user core
Dec 13 03:48:10.040000 audit[5336]: USER_END pid=5336 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:10.040000 audit[5336]: CRED_DISP pid=5336 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:10.062422 kernel: audit: type=1106 audit(1734061690.040:604): pid=5336 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:10.064593 kernel: audit: type=1104 audit(1734061690.040:605): pid=5336 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
Dec 13 03:48:10.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.24.4.219:22-172.24.4.1:39548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:48:10.063629 systemd[1]: sshd@25-172.24.4.219:22-172.24.4.1:39548.service: Deactivated successfully.
Dec 13 03:48:10.067822 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 03:48:10.069896 systemd-logind[1241]: Session 26 logged out. Waiting for processes to exit.
Dec 13 03:48:10.071900 systemd-logind[1241]: Removed session 26.